Test Report: KVM_Linux_containerd 12230

b85c4fe0fcec6d00161b49ecbfd8182c89122b1a:2021-08-16:20050

Test fail (9/269)

TestPause/serial/SecondStartNoReconfiguration (50.84s)
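What failed: the test starts the same profile a second time while the cluster is already running and expects a soft start, so pause_test.go:97 scans the output of the second `minikube start` for the line "The running cluster does not require reconfiguration". In the stdout captured below, that start instead re-runs provisioning ("* Updating the running kvm2 ... VM", "* Preparing Kubernetes ..."), so the marker never appears. A minimal sketch of the kind of check this is, with hypothetical helper names (the real assertion lives in minikube's integration suite):

    package integration

    import (
    	"strings"
    	"testing"
    )

    // wantNoReconfigure is the marker line a soft restart is expected to print.
    const wantNoReconfigure = "The running cluster does not require reconfiguration"

    // verifySecondStart fails the test when the combined output of the
    // second `minikube start` is missing the soft-restart marker.
    func verifySecondStart(t *testing.T, output string) {
    	t.Helper()
    	if !strings.Contains(output, wantNoReconfigure) {
    		t.Errorf("expected the second start log output to include %q but got: %s", wantNoReconfigure, output)
    	}
    }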

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210816222224-6986 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0816 22:24:31.879151    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
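The cert_rotation error above appears to be incidental to this failure: client-go's certificate-rotation watcher is still tracking the client certificate of an earlier, since-deleted profile (addons-20210816214122-6986) referenced from the shared kubeconfig, so the open fails. It does not involve the pause cluster under test.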

=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210816222224-6986 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (45.31906007s)
pause_test.go:97: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-20210816222224-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-20210816222224-6986 in cluster pause-20210816222224-6986
	* Updating the running kvm2 "pause-20210816222224-6986" VM ...
	* Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	* Done! kubectl is now configured to use "pause-20210816222224-6986" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I0816 22:24:28.350080   10732 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:24:28.350178   10732 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:24:28.350184   10732 out.go:311] Setting ErrFile to fd 2...
	I0816 22:24:28.350188   10732 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:24:28.350318   10732 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:24:28.350597   10732 out.go:305] Setting JSON to false
	I0816 22:24:28.397522   10732 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":4030,"bootTime":1629148638,"procs":186,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:24:28.397643   10732 start.go:121] virtualization: kvm guest
	I0816 22:24:28.400445   10732 out.go:177] * [pause-20210816222224-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:24:28.402081   10732 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:24:28.400587   10732 notify.go:169] Checking for updates...
	I0816 22:24:28.403507   10732 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:24:28.405334   10732 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:24:28.406829   10732 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:24:28.407374   10732 config.go:177] Loaded profile config "pause-20210816222224-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:24:28.408014   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:24:28.408073   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:24:28.422487   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33721
	I0816 22:24:28.423359   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:24:28.423998   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:24:28.424017   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:24:28.424398   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:24:28.424556   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:24:28.424720   10732 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:24:28.425044   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:24:28.425081   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:24:28.437710   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43927
	I0816 22:24:28.438223   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:24:28.438755   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:24:28.438782   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:24:28.439150   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:24:28.439314   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:24:28.472803   10732 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 22:24:28.472832   10732 start.go:278] selected driver: kvm2
	I0816 22:24:28.472838   10732 start.go:751] validating driver "kvm2" against &{Name:pause-20210816222224-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210816222224-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.226 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:24:28.472976   10732 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 22:24:28.473768   10732 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:24:28.473947   10732 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 22:24:28.485649   10732 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0816 22:24:28.486587   10732 cni.go:93] Creating CNI manager for ""
	I0816 22:24:28.486610   10732 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:24:28.486620   10732 start_flags.go:277] config:
	{Name:pause-20210816222224-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210816222224-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.226 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:24:28.486751   10732 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:24:28.488623   10732 out.go:177] * Starting control plane node pause-20210816222224-6986 in cluster pause-20210816222224-6986
	I0816 22:24:28.488645   10732 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:24:28.488668   10732 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0816 22:24:28.488682   10732 cache.go:56] Caching tarball of preloaded images
	I0816 22:24:28.488785   10732 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 22:24:28.488809   10732 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0816 22:24:28.488949   10732 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/config.json ...
	I0816 22:24:28.489143   10732 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:24:28.489168   10732 start.go:313] acquiring machines lock for pause-20210816222224-6986: {Name:mk808edd60d1305a42bb85791729eff4573dbb15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 22:24:32.433403   10732 start.go:317] acquired machines lock for "pause-20210816222224-6986" in 3.944203848s
	I0816 22:24:32.433444   10732 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:24:32.433452   10732 fix.go:55] fixHost starting: 
	I0816 22:24:32.433902   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:24:32.433953   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:24:32.448295   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43587
	I0816 22:24:32.448711   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:24:32.449167   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:24:32.449191   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:24:32.449587   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:24:32.449791   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:24:32.449957   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
	I0816 22:24:32.453151   10732 fix.go:108] recreateIfNeeded on pause-20210816222224-6986: state=Running err=<nil>
	W0816 22:24:32.453194   10732 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:24:32.498470   10732 out.go:177] * Updating the running kvm2 "pause-20210816222224-6986" VM ...
	I0816 22:24:32.498528   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:24:32.498781   10732 machine.go:88] provisioning docker machine ...
	I0816 22:24:32.498811   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:24:32.499018   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetMachineName
	I0816 22:24:32.499197   10732 buildroot.go:166] provisioning hostname "pause-20210816222224-6986"
	I0816 22:24:32.499216   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetMachineName
	I0816 22:24:32.499398   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
	I0816 22:24:32.504997   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:24:32.505377   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
	I0816 22:24:32.505411   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:24:32.505506   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
	I0816 22:24:32.505645   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
	I0816 22:24:32.505780   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
	I0816 22:24:32.505938   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
	I0816 22:24:32.506090   10732 main.go:130] libmachine: Using SSH client type: native
	I0816 22:24:32.506237   10732 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.50.226 22 <nil> <nil>}
	I0816 22:24:32.506251   10732 main.go:130] libmachine: About to run SSH command:
	sudo hostname pause-20210816222224-6986 && echo "pause-20210816222224-6986" | sudo tee /etc/hostname
	I0816 22:24:32.654222   10732 main.go:130] libmachine: SSH cmd err, output: <nil>: pause-20210816222224-6986
	
	I0816 22:24:32.654253   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
	I0816 22:24:32.659650   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:24:32.659996   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
	I0816 22:24:32.660024   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:24:32.660223   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
	I0816 22:24:32.660420   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
	I0816 22:24:32.660598   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
	I0816 22:24:32.660744   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
	I0816 22:24:32.660930   10732 main.go:130] libmachine: Using SSH client type: native
	I0816 22:24:32.661075   10732 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.50.226 22 <nil> <nil>}
	I0816 22:24:32.661094   10732 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-20210816222224-6986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-20210816222224-6986/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-20210816222224-6986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:24:32.775636   10732 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:24:32.775672   10732 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:24:32.775709   10732 buildroot.go:174] setting up certificates
	I0816 22:24:32.775724   10732 provision.go:83] configureAuth start
	I0816 22:24:32.775739   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetMachineName
	I0816 22:24:32.776029   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetIP
	I0816 22:24:32.781138   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:24:32.781410   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
	I0816 22:24:32.781440   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:24:32.781566   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
	I0816 22:24:32.785841   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:24:32.786169   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
	I0816 22:24:32.786197   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:24:32.786258   10732 provision.go:138] copyHostCerts
	I0816 22:24:32.786331   10732 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:24:32.786340   10732 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:24:32.786389   10732 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:24:32.786478   10732 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:24:32.786489   10732 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:24:32.786511   10732 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
	I0816 22:24:32.786585   10732 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:24:32.786596   10732 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:24:32.786615   10732 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:24:32.786689   10732 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.pause-20210816222224-6986 san=[192.168.50.226 192.168.50.226 localhost 127.0.0.1 minikube pause-20210816222224-6986]
	I0816 22:24:32.861088   10732 provision.go:172] copyRemoteCerts
	I0816 22:24:32.861140   10732 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:24:32.861160   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
	I0816 22:24:32.866155   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:24:32.866454   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
	I0816 22:24:32.866484   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:24:32.866640   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
	I0816 22:24:32.866811   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
	I0816 22:24:32.866962   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
	I0816 22:24:32.867131   10732 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
	I0816 22:24:32.959873   10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:24:32.978489   10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0816 22:24:32.998817   10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:24:33.017278   10732 provision.go:86] duration metric: configureAuth took 241.540897ms
	I0816 22:24:33.017298   10732 buildroot.go:189] setting minikube options for container-runtime
	I0816 22:24:33.017455   10732 config.go:177] Loaded profile config "pause-20210816222224-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:24:33.017470   10732 machine.go:91] provisioned docker machine in 518.670607ms
	I0816 22:24:33.017479   10732 start.go:267] post-start starting for "pause-20210816222224-6986" (driver="kvm2")
	I0816 22:24:33.017487   10732 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:24:33.017515   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:24:33.017803   10732 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:24:33.017835   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
	I0816 22:24:33.023174   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:24:33.023519   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
	I0816 22:24:33.023540   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:24:33.023716   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
	I0816 22:24:33.023865   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
	I0816 22:24:33.023996   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
	I0816 22:24:33.024116   10732 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
	I0816 22:24:33.112122   10732 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:24:33.117676   10732 info.go:137] Remote host: Buildroot 2020.02.12
	I0816 22:24:33.117704   10732 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:24:33.117767   10732 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:24:33.117906   10732 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0816 22:24:33.118027   10732 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:24:33.126187   10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:24:33.144173   10732 start.go:270] post-start completed in 126.681216ms
	I0816 22:24:33.144218   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:24:33.144463   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
	I0816 22:24:33.150202   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:24:33.150521   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
	I0816 22:24:33.150577   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:24:33.150673   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
	I0816 22:24:33.150828   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
	I0816 22:24:33.151004   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
	I0816 22:24:33.151164   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
	I0816 22:24:33.151325   10732 main.go:130] libmachine: Using SSH client type: native
	I0816 22:24:33.151506   10732 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.50.226 22 <nil> <nil>}
	I0816 22:24:33.151523   10732 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0816 22:24:33.272329   10732 main.go:130] libmachine: SSH cmd err, output: <nil>: 1629152673.272261761
	
	I0816 22:24:33.272351   10732 fix.go:212] guest clock: 1629152673.272261761
	I0816 22:24:33.272366   10732 fix.go:225] Guest: 2021-08-16 22:24:33.272261761 +0000 UTC Remote: 2021-08-16 22:24:33.144446757 +0000 UTC m=+4.850018396 (delta=127.815004ms)
	I0816 22:24:33.272386   10732 fix.go:196] guest clock delta is within tolerance: 127.815004ms
	I0816 22:24:33.272393   10732 fix.go:57] fixHost completed within 838.941925ms
	I0816 22:24:33.272399   10732 start.go:80] releasing machines lock for "pause-20210816222224-6986", held for 838.968464ms
	I0816 22:24:33.272434   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:24:33.272656   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetIP
	I0816 22:24:33.277736   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:24:33.278030   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
	I0816 22:24:33.278065   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:24:33.278165   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:24:33.278332   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:24:33.278753   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:24:33.278995   10732 ssh_runner.go:149] Run: systemctl --version
	I0816 22:24:33.279010   10732 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:24:33.279024   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
	I0816 22:24:33.279040   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
	I0816 22:24:33.285116   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:24:33.285541   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
	I0816 22:24:33.285593   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:24:33.285633   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
	I0816 22:24:33.285788   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
	I0816 22:24:33.285924   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
	I0816 22:24:33.286058   10732 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
	I0816 22:24:33.286534   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:24:33.286892   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
	I0816 22:24:33.286927   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:24:33.287075   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
	I0816 22:24:33.287218   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
	I0816 22:24:33.287346   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
	I0816 22:24:33.287452   10732 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
	I0816 22:24:33.391442   10732 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:24:33.391556   10732 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:24:33.445030   10732 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:24:33.445056   10732 containerd.go:517] Images already preloaded, skipping extraction
	I0816 22:24:33.445118   10732 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0816 22:24:33.458484   10732 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0816 22:24:33.469602   10732 docker.go:153] disabling docker service ...
	I0816 22:24:33.469659   10732 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:24:33.482924   10732 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:24:33.494359   10732 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:24:33.656831   10732 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:24:33.841381   10732 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:24:33.852674   10732 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:24:33.865658   10732 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
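
The printf payload above is the base64-encoded /etc/containerd/config.toml that minikube writes over SSH; decoded, it pins the CRI runtime to io.containerd.runc.v2 with SystemdCgroup = false, the sandbox image to k8s.gcr.io/pause:3.4.1, and the CNI directories to /opt/cni/bin and /etc/cni/net.d. A small sketch for inspecting it locally (only the head of the payload is inlined here; paste the full string from the log line to see the whole file):

    package main

    import (
    	"encoding/base64"
    	"fmt"
    	"log"
    )

    func main() {
    	// First bytes of the payload from the log line above (unpadded base64).
    	const head = "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgo"
    	raw, err := base64.RawStdEncoding.DecodeString(head)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Print(string(raw)) // prints: root = "/var/lib/containerd"
    }
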
	I0816 22:24:33.879242   10732 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:24:33.885420   10732 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 22:24:33.892178   10732 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:24:34.088915   10732 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:24:34.158425   10732 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0816 22:24:34.158486   10732 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:24:34.164509   10732 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0816 22:24:35.269953   10732 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:24:35.276600   10732 retry.go:31] will retry after 2.160763633s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
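
The stat failures above are expected for a moment: restarting containerd removes and recreates its socket, and minikube polls for it with growing delays inside the 60s budget noted at start.go:392. A rough local sketch of that wait loop (assumed shape; the real code runs stat over SSH through minikube's retry helper):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists or the timeout elapses,
    // lengthening the pause between attempts like the retry.go lines above.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	delay := time.Second
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		} else if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s: %w", path, err)
    		}
    		time.Sleep(delay)
    		delay *= 2 // back off before the next attempt
    	}
    }

    func main() {
    	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
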
	I0816 22:24:37.438899   10732 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:24:37.445942   10732 start.go:413] Will wait 60s for crictl version
	I0816 22:24:37.446003   10732 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:24:37.491525   10732 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0816 22:24:37.491588   10732 ssh_runner.go:149] Run: containerd --version
	I0816 22:24:37.542241   10732 ssh_runner.go:149] Run: containerd --version
	I0816 22:24:37.734298   10732 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0816 22:24:37.734354   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetIP
	I0816 22:24:37.741113   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:24:37.741570   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
	I0816 22:24:37.741607   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:24:37.741923   10732 ssh_runner.go:149] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 22:24:37.748812   10732 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:24:37.748875   10732 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:24:37.796309   10732 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:24:37.796331   10732 containerd.go:517] Images already preloaded, skipping extraction
	I0816 22:24:37.796387   10732 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:24:37.894057   10732 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:24:37.894088   10732 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:24:37.894144   10732 ssh_runner.go:149] Run: sudo crictl info
	I0816 22:24:37.958935   10732 cni.go:93] Creating CNI manager for ""
	I0816 22:24:37.958981   10732 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:24:37.958992   10732 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:24:37.959007   10732 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.226 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-20210816222224-6986 NodeName:pause-20210816222224-6986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.226"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.226 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:24:37.959160   10732 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.226
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "pause-20210816222224-6986"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.226
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.226"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 22:24:37.959268   10732 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=pause-20210816222224-6986 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.226 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:pause-20210816222224-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0816 22:24:37.959328   10732 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0816 22:24:37.985980   10732 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:24:37.986061   10732 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:24:37.996940   10732 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (541 bytes)
	I0816 22:24:38.017786   10732 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 22:24:38.048096   10732 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2081 bytes)
	I0816 22:24:38.071876   10732 ssh_runner.go:149] Run: grep 192.168.50.226	control-plane.minikube.internal$ /etc/hosts
	I0816 22:24:38.079162   10732 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986 for IP: 192.168.50.226
	I0816 22:24:38.079218   10732 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:24:38.079238   10732 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:24:38.079308   10732 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.key
	I0816 22:24:38.079331   10732 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/apiserver.key.5cb43a24
	I0816 22:24:38.079351   10732 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/proxy-client.key
	I0816 22:24:38.079516   10732 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem (1338 bytes)
	W0816 22:24:38.079572   10732 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
	I0816 22:24:38.079585   10732 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 22:24:38.079623   10732 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:24:38.079673   10732 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:24:38.079714   10732 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1675 bytes)
	I0816 22:24:38.079784   10732 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:24:38.081082   10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:24:38.125511   10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 22:24:38.161327   10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:24:38.200447   10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 22:24:38.228844   10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:24:38.316480   10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 22:24:38.353700   10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:24:38.400715   10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 22:24:38.453300   10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
	I0816 22:24:38.483284   10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
	I0816 22:24:38.518787   10732 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:24:38.549501   10732 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
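
The "scp memory" notation above means ssh_runner rendered the file contents in memory and streamed them to the guest rather than copying an existing file. A rough shell equivalent, assuming the rendered kubeconfig sits in a hypothetical $KUBECONFIG_CONTENT variable and using the SSH endpoint this profile reports later in the log:

    # Sketch only, not minikube's actual transport: stream in-memory bytes
    # into a root-owned file on the guest
    printf '%s' "$KUBECONFIG_CONTENT" | \
      ssh docker@192.168.50.226 'sudo tee /var/lib/minikube/kubeconfig >/dev/null'
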
	I0816 22:24:38.577939   10732 ssh_runner.go:149] Run: openssl version
	I0816 22:24:38.593224   10732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
	I0816 22:24:38.657361   10732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6986.pem
	I0816 22:24:38.673607   10732 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:49 /usr/share/ca-certificates/6986.pem
	I0816 22:24:38.673678   10732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
	I0816 22:24:38.702412   10732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
	I0816 22:24:38.726832   10732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
	I0816 22:24:38.759604   10732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/69862.pem
	I0816 22:24:38.776280   10732 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:49 /usr/share/ca-certificates/69862.pem
	I0816 22:24:38.776363   10732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
	I0816 22:24:38.809703   10732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:24:38.843479   10732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:24:38.879490   10732 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:24:38.907339   10732 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:42 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:24:38.907406   10732 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:24:38.932662   10732 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
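
The 51391683.0, 3ec20f2e.0 and b5213941.0 symlink names follow OpenSSL's hashed-directory convention: certificates in /etc/ssl/certs are looked up by the subject hash plus a numeric suffix, which is exactly what the openssl x509 -hash calls above compute. A minimal sketch for one certificate:

    # Compute the subject hash and create the lookup symlink OpenSSL expects
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
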
	I0816 22:24:38.972753   10732 kubeadm.go:390] StartCluster: {Name:pause-20210816222224-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:pause-20210816222224-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.226 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:24:38.972868   10732 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0816 22:24:38.972930   10732 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:24:39.209329   10732 cri.go:76] found id: "28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0"
	I0816 22:24:39.209356   10732 cri.go:76] found id: "a8503bd796d5d979a6e1b8b5154986e8b77de391b4f091211451ea5f52808e52"
	I0816 22:24:39.209363   10732 cri.go:76] found id: "124fa393359f758ea47161b345d2cab4ce486d4473a4caad483449464d44315f"
	I0816 22:24:39.209369   10732 cri.go:76] found id: "8710cefecdbe5d31cd44e9ae3378bc08cbc56001326a1cb38026755196cac7d1"
	I0816 22:24:39.209376   10732 cri.go:76] found id: "7dbd1fc92c3753c4757164cea54536cf62394c731fffd8fb94124eea6f32138f"
	I0816 22:24:39.209383   10732 cri.go:76] found id: "38dc61b214a9cbd019de4ca9ab52fb6baf728336de6d715df22b027522ad8b20"
	I0816 22:24:39.209388   10732 cri.go:76] found id: ""
	I0816 22:24:39.209439   10732 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0816 22:24:39.312395   10732 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d","pid":4260,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d/rootfs","created":"2021-08-16T22:24:38.416064875Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb","pid":4355,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb/rootfs","created":"2021-08-16T22:24:38.822262087Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03","pid":4283,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03/rootfs","created":"2021-08-16T22:24:38.475165162Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c"},"owner":"root"}]
	I0816 22:24:39.312540   10732 cri.go:113] list returned 3 containers
	I0816 22:24:39.312557   10732 cri.go:116] container: {ID:1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d Status:running}
	I0816 22:24:39.312575   10732 cri.go:118] skipping 1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d - not in ps
	I0816 22:24:39.312586   10732 cri.go:116] container: {ID:3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb Status:created}
	I0816 22:24:39.312595   10732 cri.go:118] skipping 3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb - not in ps
	I0816 22:24:39.312600   10732 cri.go:116] container: {ID:feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03 Status:created}
	I0816 22:24:39.312608   10732 cri.go:118] skipping feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03 - not in ps
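
Two views are being reconciled here: crictl reports the container IDs the CRI knows about in the kube-system namespace, while runc reports raw OCI tasks in containerd's k8s.io namespace; tasks only runc sees (the pod sandboxes above) are skipped. The two probes, exactly as run on the guest:

    # CRI-level view: container IDs labelled with the kube-system namespace
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # Runtime-level view: raw runc tasks for containerd, as JSON
    sudo runc --root /run/containerd/runc/k8s.io list -f json
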
	I0816 22:24:39.312654   10732 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:24:39.330613   10732 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:24:39.330640   10732 kubeadm.go:600] restartCluster start
	I0816 22:24:39.330714   10732 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:24:39.355170   10732 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:24:39.356620   10732 kubeconfig.go:93] found "pause-20210816222224-6986" server: "https://192.168.50.226:8443"
	I0816 22:24:39.357673   10732 kapi.go:59] client config for pause-20210816222224-6986: &rest.Config{Host:"https://192.168.50.226:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 22:24:39.360056   10732 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:24:39.378189   10732 api_server.go:164] Checking apiserver status ...
	I0816 22:24:39.378254   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:24:39.397531   10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
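
In the probe above, pgrep -f matches against the full command line, -x requires the pattern to match it exactly (as a regex), and -n keeps only the newest match; exit status 1 simply means no kube-apiserver process exists yet, so the check is retried on an interval, as the following blocks show. Run in isolation:

    # Prints a PID and exits 0 once kube-apiserver is up; exits 1 while it is not
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
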
	I0816 22:24:39.597857   10732 api_server.go:164] Checking apiserver status ...
	I0816 22:24:39.597937   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:24:39.612958   10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:24:39.798158   10732 api_server.go:164] Checking apiserver status ...
	I0816 22:24:39.798265   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:24:39.813430   10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:24:39.997695   10732 api_server.go:164] Checking apiserver status ...
	I0816 22:24:39.997763   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:24:40.010358   10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:24:40.198643   10732 api_server.go:164] Checking apiserver status ...
	I0816 22:24:40.198727   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:24:40.214548   10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:24:40.397717   10732 api_server.go:164] Checking apiserver status ...
	I0816 22:24:40.397785   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:24:40.411349   10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:24:40.598597   10732 api_server.go:164] Checking apiserver status ...
	I0816 22:24:40.598668   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:24:40.615618   10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:24:40.797999   10732 api_server.go:164] Checking apiserver status ...
	I0816 22:24:40.798110   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:24:40.810917   10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:24:40.998241   10732 api_server.go:164] Checking apiserver status ...
	I0816 22:24:40.998309   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:24:41.008263   10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:24:41.198400   10732 api_server.go:164] Checking apiserver status ...
	I0816 22:24:41.198478   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:24:41.209207   10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:24:41.398411   10732 api_server.go:164] Checking apiserver status ...
	I0816 22:24:41.398495   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:24:41.409500   10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:24:41.597769   10732 api_server.go:164] Checking apiserver status ...
	I0816 22:24:41.597829   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:24:41.611018   10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:24:41.798345   10732 api_server.go:164] Checking apiserver status ...
	I0816 22:24:41.798446   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:24:41.811463   10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:24:41.997819   10732 api_server.go:164] Checking apiserver status ...
	I0816 22:24:41.997895   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:24:42.012473   10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:24:42.197709   10732 api_server.go:164] Checking apiserver status ...
	I0816 22:24:42.197799   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:24:42.210959   10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:24:42.398113   10732 api_server.go:164] Checking apiserver status ...
	I0816 22:24:42.398196   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:24:42.408454   10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:24:42.408483   10732 api_server.go:164] Checking apiserver status ...
	I0816 22:24:42.408544   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:24:42.417898   10732 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:24:42.417958   10732 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0816 22:24:42.417969   10732 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:24:42.417982   10732 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:24:42.418043   10732 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:24:42.458578   10732 cri.go:76] found id: "28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0"
	I0816 22:24:42.458609   10732 cri.go:76] found id: "a8503bd796d5d979a6e1b8b5154986e8b77de391b4f091211451ea5f52808e52"
	I0816 22:24:42.458616   10732 cri.go:76] found id: "124fa393359f758ea47161b345d2cab4ce486d4473a4caad483449464d44315f"
	I0816 22:24:42.458622   10732 cri.go:76] found id: "8710cefecdbe5d31cd44e9ae3378bc08cbc56001326a1cb38026755196cac7d1"
	I0816 22:24:42.458628   10732 cri.go:76] found id: "7dbd1fc92c3753c4757164cea54536cf62394c731fffd8fb94124eea6f32138f"
	I0816 22:24:42.458634   10732 cri.go:76] found id: "38dc61b214a9cbd019de4ca9ab52fb6baf728336de6d715df22b027522ad8b20"
	I0816 22:24:42.458639   10732 cri.go:76] found id: ""
	I0816 22:24:42.458646   10732 cri.go:221] Stopping containers: [28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0 a8503bd796d5d979a6e1b8b5154986e8b77de391b4f091211451ea5f52808e52 124fa393359f758ea47161b345d2cab4ce486d4473a4caad483449464d44315f 8710cefecdbe5d31cd44e9ae3378bc08cbc56001326a1cb38026755196cac7d1 7dbd1fc92c3753c4757164cea54536cf62394c731fffd8fb94124eea6f32138f 38dc61b214a9cbd019de4ca9ab52fb6baf728336de6d715df22b027522ad8b20]
	I0816 22:24:42.458716   10732 ssh_runner.go:149] Run: which crictl
	I0816 22:24:42.464088   10732 ssh_runner.go:149] Run: sudo /bin/crictl stop 28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0 a8503bd796d5d979a6e1b8b5154986e8b77de391b4f091211451ea5f52808e52 124fa393359f758ea47161b345d2cab4ce486d4473a4caad483449464d44315f 8710cefecdbe5d31cd44e9ae3378bc08cbc56001326a1cb38026755196cac7d1 7dbd1fc92c3753c4757164cea54536cf62394c731fffd8fb94124eea6f32138f 38dc61b214a9cbd019de4ca9ab52fb6baf728336de6d715df22b027522ad8b20
	I0816 22:24:42.528840   10732 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:24:42.578695   10732 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:24:42.590731   10732 kubeadm.go:154] found existing configuration files:
	-rw------- 1 root root 5643 Aug 16 22:23 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Aug 16 22:23 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Aug 16 22:23 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5602 Aug 16 22:23 /etc/kubernetes/scheduler.conf
	
	I0816 22:24:42.590803   10732 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0816 22:24:42.598201   10732 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0816 22:24:42.605364   10732 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0816 22:24:42.611530   10732 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:24:42.611583   10732 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0816 22:24:42.618324   10732 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0816 22:24:42.626604   10732 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:24:42.626656   10732 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
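
The pattern in this block: each component kubeconfig is kept only if it already references https://control-plane.minikube.internal:8443; when the grep exits non-zero, the file is removed so kubeadm can regenerate it in the next step. Condensed to one file, as a sketch:

    # Drop a stale component kubeconfig that lacks the expected endpoint
    f=/etc/kubernetes/scheduler.conf
    sudo grep -q 'https://control-plane.minikube.internal:8443' "$f" || sudo rm -f "$f"
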
	I0816 22:24:42.633541   10732 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:24:42.642260   10732 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:24:42.642284   10732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:24:42.836959   10732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:24:43.629478   10732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:24:46.327994   10732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:24:46.444335   10732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
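
Rather than a full `kubeadm init`, the restart path re-runs only the phases needed to rebuild the control plane from the saved config, in the order shown above; the PATH prefix pins the v1.21.3 binaries. The same sequence, condensed into a loop for readability:

    # Re-run selected kubeadm phases against the saved config; $phase is left
    # unquoted on purpose so "certs all" splits into subcommand + argument
    for phase in "certs all" "kubeconfig all" kubelet-start "control-plane all" "etcd local"; do
      sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done
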
	I0816 22:24:46.599493   10732 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:24:46.599562   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:24:47.112726   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:24:47.612988   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:24:48.112162   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:24:48.612444   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:24:49.757387   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:24:50.113160   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:24:50.612450   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:24:51.112276   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:24:51.613162   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:24:52.112539   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:24:52.612410   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:24:53.112758   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:24:53.612575   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:24:53.632520   10732 api_server.go:70] duration metric: took 7.033030474s to wait for apiserver process to appear ...
	I0816 22:24:53.632561   10732 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:24:53.632570   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:24:53.633109   10732 api_server.go:255] stopped: https://192.168.50.226:8443/healthz: Get "https://192.168.50.226:8443/healthz": dial tcp 192.168.50.226:8443: connect: connection refused
	I0816 22:24:54.133848   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:24:59.090396   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:24:59.090431   10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:24:59.133677   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:24:59.161347   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:24:59.161378   10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:24:59.633911   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:24:59.639524   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:24:59.639548   10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:25:00.133775   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:25:00.151749   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:25:00.151784   10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:25:00.633968   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:25:00.646578   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 200:
	ok
	I0816 22:25:00.661937   10732 api_server.go:139] control plane version: v1.21.3
	I0816 22:25:00.661961   10732 api_server.go:129] duration metric: took 7.029396002s to wait for apiserver health ...
	I0816 22:25:00.661972   10732 cni.go:93] Creating CNI manager for ""
	I0816 22:25:00.661979   10732 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:25:00.663954   10732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:25:00.664005   10732 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:25:00.674379   10732 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
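
The 457-byte conflist streamed above is minikube's bridge CNI configuration; its exact contents are not printed in this log. A hypothetical minimal bridge conflist of the same general shape, for illustration only (the subnet and plugin list are assumptions, not taken from this run):

    # Illustrative only: not the exact bytes minikube wrote
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}
	EOF
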
	I0816 22:25:00.699896   10732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:25:00.718704   10732 system_pods.go:59] 6 kube-system pods found
	I0816 22:25:00.718763   10732 system_pods.go:61] "coredns-558bd4d5db-gkxhz" [5aa76749-775e-423d-bbf9-680a20a27051] Running
	I0816 22:25:00.718780   10732 system_pods.go:61] "etcd-pause-20210816222224-6986" [f621b99e-0604-4bed-8c4e-4f5741e52f7b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 22:25:00.718802   10732 system_pods.go:61] "kube-apiserver-pause-20210816222224-6986" [b1c46709-4b0b-4c9c-a701-d595a58214ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 22:25:00.718811   10732 system_pods.go:61] "kube-controller-manager-pause-20210816222224-6986" [777c035e-5f34-469a-afb5-4f8ef90ccbfb] Running
	I0816 22:25:00.718819   10732 system_pods.go:61] "kube-proxy-7l59t" [3c0e0899-31c1-477a-a6d4-2844091deea2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:25:00.718830   10732 system_pods.go:61] "kube-scheduler-pause-20210816222224-6986" [6b32acf9-8108-45a6-901e-70cd125190f8] Running
	I0816 22:25:00.718838   10732 system_pods.go:74] duration metric: took 18.921493ms to wait for pod list to return data ...
	I0816 22:25:00.718847   10732 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:25:00.723789   10732 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:25:00.723820   10732 node_conditions.go:123] node cpu capacity is 2
	I0816 22:25:00.723836   10732 node_conditions.go:105] duration metric: took 4.978152ms to run NodePressure ...
	I0816 22:25:00.723854   10732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:01.396623   10732 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:25:01.403109   10732 kubeadm.go:746] kubelet initialised
	I0816 22:25:01.403139   10732 kubeadm.go:747] duration metric: took 6.492031ms waiting for restarted kubelet to initialise ...
	I0816 22:25:01.403151   10732 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:25:01.409386   10732 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:03.432924   10732 pod_ready.go:102] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:05.435685   10732 pod_ready.go:102] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:05.951433   10732 pod_ready.go:92] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:05.951457   10732 pod_ready.go:81] duration metric: took 4.542029801s waiting for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:05.951470   10732 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:06.969870   10732 pod_ready.go:92] pod "etcd-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:06.969903   10732 pod_ready.go:81] duration metric: took 1.018424787s waiting for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:06.969918   10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:06.978963   10732 pod_ready.go:92] pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:06.978984   10732 pod_ready.go:81] duration metric: took 9.058114ms waiting for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:06.978997   10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:09.000201   10732 pod_ready.go:102] pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:10.499577   10732 pod_ready.go:92] pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:10.499613   10732 pod_ready.go:81] duration metric: took 3.520603411s waiting for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.499631   10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.508715   10732 pod_ready.go:92] pod "kube-proxy-7l59t" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:10.508738   10732 pod_ready.go:81] duration metric: took 9.098529ms waiting for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.508749   10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.514516   10732 pod_ready.go:92] pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:10.514536   10732 pod_ready.go:81] duration metric: took 5.779042ms waiting for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.514546   10732 pod_ready.go:38] duration metric: took 9.111379533s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:25:10.514567   10732 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:25:10.530219   10732 ops.go:34] apiserver oom_adj: -16
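
The -16 read back from /proc/<pid>/oom_adj means the apiserver has been biased away from the kernel OOM killer (the legacy oom_adj scale runs from -17, never kill, to +15). Checked directly on the guest:

    # Negative values make the OOM killer much less likely to pick this process
    cat "/proc/$(pgrep -n kube-apiserver)/oom_adj"
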
	I0816 22:25:10.530242   10732 kubeadm.go:604] restartCluster took 31.19958524s
	I0816 22:25:10.530251   10732 kubeadm.go:392] StartCluster complete in 31.557512009s
	I0816 22:25:10.530271   10732 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:25:10.530404   10732 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:10.531238   10732 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:25:10.532000   10732 kapi.go:59] client config for pause-20210816222224-6986: &rest.Config{Host:"https://192.168.50.226:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 22:25:10.647656   10732 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210816222224-6986" rescaled to 1
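
Single control-plane profiles pin CoreDNS to one replica; the in-process rescale recorded above is equivalent to running:

    # Scale the CoreDNS deployment down to a single replica
    kubectl -n kube-system scale deployment coredns --replicas=1
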
	I0816 22:25:10.647728   10732 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.50.226 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:25:10.647757   10732 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:25:10.647794   10732 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0816 22:25:10.649327   10732 out.go:177] * Verifying Kubernetes components...
	I0816 22:25:10.649398   10732 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:25:10.647852   10732 addons.go:59] Setting storage-provisioner=true in profile "pause-20210816222224-6986"
	I0816 22:25:10.647862   10732 addons.go:59] Setting default-storageclass=true in profile "pause-20210816222224-6986"
	I0816 22:25:10.647991   10732 config.go:177] Loaded profile config "pause-20210816222224-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:25:10.649480   10732 addons.go:135] Setting addon storage-provisioner=true in "pause-20210816222224-6986"
	W0816 22:25:10.649500   10732 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:25:10.649516   10732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210816222224-6986"
	I0816 22:25:10.649532   10732 host.go:66] Checking if "pause-20210816222224-6986" exists ...
	I0816 22:25:10.650748   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:10.650827   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:10.653189   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:10.653249   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:10.664888   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0816 22:25:10.665365   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:10.665893   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:25:10.665915   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:10.666315   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:10.666493   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
	I0816 22:25:10.667827   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34733
	I0816 22:25:10.668293   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:10.668762   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:25:10.668782   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:10.669202   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:10.669761   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:10.669802   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:10.670861   10732 kapi.go:59] client config for pause-20210816222224-6986: &rest.Config{Host:"https://192.168.50.226:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 22:25:10.676486   10732 addons.go:135] Setting addon default-storageclass=true in "pause-20210816222224-6986"
	W0816 22:25:10.676510   10732 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:25:10.676539   10732 host.go:66] Checking if "pause-20210816222224-6986" exists ...
	I0816 22:25:10.676985   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:10.677031   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:10.682317   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39313
	I0816 22:25:10.682805   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:10.683360   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:25:10.683382   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:10.683737   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:10.683924   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
	I0816 22:25:10.687519   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:25:10.693597   10732 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:25:10.693708   10732 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:25:10.693722   10732 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:25:10.693742   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
	I0816 22:25:10.692712   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45043
	I0816 22:25:10.694563   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:10.695082   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:25:10.695103   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:10.695455   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:10.696063   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:10.696115   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:10.700367   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:25:10.700792   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
	I0816 22:25:10.700813   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:25:10.701111   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
	I0816 22:25:10.701350   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
	I0816 22:25:10.701537   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
	I0816 22:25:10.701730   10732 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
	I0816 22:25:10.709887   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33339
	I0816 22:25:10.710304   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:10.710912   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:25:10.710938   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:10.711336   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:10.711547   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
	I0816 22:25:10.714430   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:25:10.714683   10732 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:25:10.714702   10732 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:25:10.714720   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
	I0816 22:25:10.720808   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:25:10.721319   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
	I0816 22:25:10.721342   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:25:10.721485   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
	I0816 22:25:10.721643   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
	I0816 22:25:10.721769   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
	I0816 22:25:10.721919   10732 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
	I0816 22:25:10.832212   10732 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:25:10.862755   10732 node_ready.go:35] waiting up to 6m0s for node "pause-20210816222224-6986" to be "Ready" ...
	I0816 22:25:10.863120   10732 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0816 22:25:10.867110   10732 node_ready.go:49] node "pause-20210816222224-6986" has status "Ready":"True"
	I0816 22:25:10.867130   10732 node_ready.go:38] duration metric: took 4.344058ms waiting for node "pause-20210816222224-6986" to be "Ready" ...
	I0816 22:25:10.867143   10732 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:25:10.883113   10732 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.892065   10732 pod_ready.go:92] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:10.892084   10732 pod_ready.go:81] duration metric: took 8.944517ms waiting for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.892096   10732 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.895462   10732 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:25:11.127716   10732 pod_ready.go:92] pod "etcd-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:11.127749   10732 pod_ready.go:81] duration metric: took 235.644563ms waiting for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:11.127765   10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:11.536655   10732 pod_ready.go:92] pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:11.536676   10732 pod_ready.go:81] duration metric: took 408.901449ms waiting for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:11.536690   10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:11.539596   10732 main.go:130] libmachine: Making call to close driver server
	I0816 22:25:11.539618   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
	I0816 22:25:11.539697   10732 main.go:130] libmachine: Making call to close driver server
	I0816 22:25:11.539725   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
	I0816 22:25:11.540009   10732 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:25:11.540024   10732 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:25:11.540041   10732 main.go:130] libmachine: Making call to close driver server
	I0816 22:25:11.540041   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | Closing plugin on server side
	I0816 22:25:11.540051   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
	I0816 22:25:11.540067   10732 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:25:11.540075   10732 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:25:11.540083   10732 main.go:130] libmachine: Making call to close driver server
	I0816 22:25:11.540092   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
	I0816 22:25:11.540126   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | Closing plugin on server side
	I0816 22:25:11.540298   10732 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:25:11.540310   10732 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:25:11.540320   10732 main.go:130] libmachine: Making call to close driver server
	I0816 22:25:11.540329   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
	I0816 22:25:11.540417   10732 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:25:11.540429   10732 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:25:11.540490   10732 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:25:11.540502   10732 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:25:11.542638   10732 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0816 22:25:11.542662   10732 addons.go:344] enableAddons completed in 894.875902ms
	I0816 22:25:11.931820   10732 pod_ready.go:92] pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:11.931845   10732 pod_ready.go:81] duration metric: took 395.147421ms waiting for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:11.931860   10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:12.329464   10732 pod_ready.go:92] pod "kube-proxy-7l59t" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:12.329493   10732 pod_ready.go:81] duration metric: took 397.623774ms waiting for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:12.329507   10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:12.734335   10732 pod_ready.go:92] pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:12.734360   10732 pod_ready.go:81] duration metric: took 404.844565ms waiting for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:12.734374   10732 pod_ready.go:38] duration metric: took 1.867218741s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:25:12.734394   10732 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:25:12.734439   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:12.754510   10732 api_server.go:70] duration metric: took 2.106745047s to wait for apiserver process to appear ...
	I0816 22:25:12.754540   10732 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:25:12.754553   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:25:12.792067   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 200:
	ok
	I0816 22:25:12.794542   10732 api_server.go:139] control plane version: v1.21.3
	I0816 22:25:12.794565   10732 api_server.go:129] duration metric: took 40.01886ms to wait for apiserver health ...
	I0816 22:25:12.794577   10732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:25:12.941013   10732 system_pods.go:59] 7 kube-system pods found
	I0816 22:25:12.941048   10732 system_pods.go:61] "coredns-558bd4d5db-gkxhz" [5aa76749-775e-423d-bbf9-680a20a27051] Running
	I0816 22:25:12.941053   10732 system_pods.go:61] "etcd-pause-20210816222224-6986" [f621b99e-0604-4bed-8c4e-4f5741e52f7b] Running
	I0816 22:25:12.941057   10732 system_pods.go:61] "kube-apiserver-pause-20210816222224-6986" [b1c46709-4b0b-4c9c-a701-d595a58214ba] Running
	I0816 22:25:12.941102   10732 system_pods.go:61] "kube-controller-manager-pause-20210816222224-6986" [777c035e-5f34-469a-afb5-4f8ef90ccbfb] Running
	I0816 22:25:12.941116   10732 system_pods.go:61] "kube-proxy-7l59t" [3c0e0899-31c1-477a-a6d4-2844091deea2] Running
	I0816 22:25:12.941122   10732 system_pods.go:61] "kube-scheduler-pause-20210816222224-6986" [6b32acf9-8108-45a6-901e-70cd125190f8] Running
	I0816 22:25:12.941136   10732 system_pods.go:61] "storage-provisioner" [4f138dc7-da0e-4775-b4de-b0f7d616b212] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:25:12.941158   10732 system_pods.go:74] duration metric: took 146.575596ms to wait for pod list to return data ...
	I0816 22:25:12.941176   10732 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:25:13.132349   10732 default_sa.go:45] found service account: "default"
	I0816 22:25:13.132381   10732 default_sa.go:55] duration metric: took 191.195172ms for default service account to be created ...
	I0816 22:25:13.132394   10732 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:25:13.340094   10732 system_pods.go:86] 7 kube-system pods found
	I0816 22:25:13.340135   10732 system_pods.go:89] "coredns-558bd4d5db-gkxhz" [5aa76749-775e-423d-bbf9-680a20a27051] Running
	I0816 22:25:13.340146   10732 system_pods.go:89] "etcd-pause-20210816222224-6986" [f621b99e-0604-4bed-8c4e-4f5741e52f7b] Running
	I0816 22:25:13.340155   10732 system_pods.go:89] "kube-apiserver-pause-20210816222224-6986" [b1c46709-4b0b-4c9c-a701-d595a58214ba] Running
	I0816 22:25:13.340163   10732 system_pods.go:89] "kube-controller-manager-pause-20210816222224-6986" [777c035e-5f34-469a-afb5-4f8ef90ccbfb] Running
	I0816 22:25:13.340172   10732 system_pods.go:89] "kube-proxy-7l59t" [3c0e0899-31c1-477a-a6d4-2844091deea2] Running
	I0816 22:25:13.340184   10732 system_pods.go:89] "kube-scheduler-pause-20210816222224-6986" [6b32acf9-8108-45a6-901e-70cd125190f8] Running
	I0816 22:25:13.340196   10732 system_pods.go:89] "storage-provisioner" [4f138dc7-da0e-4775-b4de-b0f7d616b212] Running
	I0816 22:25:13.340210   10732 system_pods.go:126] duration metric: took 207.809217ms to wait for k8s-apps to be running ...
	I0816 22:25:13.340225   10732 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:25:13.340279   10732 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:25:13.358716   10732 system_svc.go:56] duration metric: took 18.47804ms WaitForService to wait for kubelet.
	I0816 22:25:13.358752   10732 kubeadm.go:547] duration metric: took 2.710991068s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:25:13.358785   10732 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:25:13.536797   10732 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:25:13.536830   10732 node_conditions.go:123] node cpu capacity is 2
	I0816 22:25:13.536848   10732 node_conditions.go:105] duration metric: took 178.056493ms to run NodePressure ...
	I0816 22:25:13.536863   10732 start.go:231] waiting for startup goroutines ...
	I0816 22:25:13.602415   10732 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 22:25:13.604425   10732 out.go:177] * Done! kubectl is now configured to use "pause-20210816222224-6986" cluster and "default" namespace by default
** /stderr **
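
The node_ready, pod_ready, and api_server entries in the trace above all share one wait loop: poll the object's status until it reports Ready (or /healthz returns 200), up to a fixed timeout, then emit a "duration metric: took ..." line. A minimal sketch of that pattern, assuming client-go's wait.PollImmediate and a hypothetical podReady helper rather than minikube's actual pod_ready.go code:

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls until the named pod is Ready or the timeout elapses,
// the same shape as the `waiting up to 6m0s for pod ...` lines in the trace.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // tolerate transient API errors and keep polling
		}
		return podReady(pod), nil
	})
}

Each "duration metric: took ..." line above is one such wait completing, e.g. the 8.944517ms wait for coredns-558bd4d5db-gkxhz.
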
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816222224-6986 -n pause-20210816222224-6986
helpers_test.go:245: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210816222224-6986 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210816222224-6986 logs -n 25: exit status 110 (2.653220078s)
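
In the log dump below, the api_server.go lines show /healthz moving through the phases of an apiserver restart: connection refused while the process is down, 403 while anonymous access is still rejected (the rbac/bootstrap-roles poststarthook has not yet granted it), 500 while poststarthooks are still failing, and finally 200 "ok". A sketch of such a probe loop, using the same half-second retry cadence the timestamps show; the InsecureSkipVerify transport is an illustration-only assumption, not how minikube's checker authenticates:

package healthz

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns HTTP 200, tolerating the
// connection-refused, 403, and 500 phases seen in the trace below.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: a real checker should trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil // "healthz returned 200: ok"
			}
			// 403 and 500 mean the apiserver is up but still bootstrapping.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s to return 200", url)
}

Once such a loop returns, the start flow moves on, which is why the 200 at 22:25:00.646 below is immediately followed by the control plane version and bridge CNI lines.
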
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                  |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | multinode-20210816215441-6986          | multinode-20210816215441-6986          | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:08:07 UTC | Mon, 16 Aug 2021 22:11:11 UTC |
	|         | stop                                   |                                        |         |         |                               |                               |
	| start   | -p                                     | multinode-20210816215441-6986          | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:11:11 UTC | Mon, 16 Aug 2021 22:15:19 UTC |
	|         | multinode-20210816215441-6986          |                                        |         |         |                               |                               |
	|         | --wait=true -v=8                       |                                        |         |         |                               |                               |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | --driver=kvm2                          |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| start   | -p                                     | multinode-20210816215441-6986-m03      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:19 UTC | Mon, 16 Aug 2021 22:16:20 UTC |
	|         | multinode-20210816215441-6986-m03      |                                        |         |         |                               |                               |
	|         | --driver=kvm2                          |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| delete  | -p                                     | multinode-20210816215441-6986-m03      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:16:20 UTC | Mon, 16 Aug 2021 22:16:21 UTC |
	|         | multinode-20210816215441-6986-m03      |                                        |         |         |                               |                               |
	| delete  | -p                                     | multinode-20210816215441-6986          | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:16:21 UTC | Mon, 16 Aug 2021 22:16:23 UTC |
	|         | multinode-20210816215441-6986          |                                        |         |         |                               |                               |
	| start   | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:18:07 UTC | Mon, 16 Aug 2021 22:19:45 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | --wait=true --preload=false            |                                        |         |         |                               |                               |
	|         | --driver=kvm2                          |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0           |                                        |         |         |                               |                               |
	| ssh     | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:45 UTC | Mon, 16 Aug 2021 22:19:47 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | -- sudo crictl pull busybox            |                                        |         |         |                               |                               |
	| start   | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:48 UTC | Mon, 16 Aug 2021 22:20:39 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=kvm2         |                                        |         |         |                               |                               |
	|         |  --container-runtime=containerd        |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3           |                                        |         |         |                               |                               |
	| ssh     | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:39 UTC | Mon, 16 Aug 2021 22:20:39 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | -- sudo crictl image ls                |                                        |         |         |                               |                               |
	| delete  | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:39 UTC | Mon, 16 Aug 2021 22:20:40 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	| start   | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:40 UTC | Mon, 16 Aug 2021 22:21:45 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	|         | --memory=2048 --driver=kvm2            |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| stop    | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:46 UTC | Mon, 16 Aug 2021 22:21:46 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	|         | --cancel-scheduled                     |                                        |         |         |                               |                               |
	| stop    | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:58 UTC | Mon, 16 Aug 2021 22:22:05 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	|         | --schedule 5s                          |                                        |         |         |                               |                               |
	| delete  | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:23 UTC | Mon, 16 Aug 2021 22:22:24 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	| delete  | -p kubenet-20210816222224-6986         | kubenet-20210816222224-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:22:25 UTC |
	| delete  | -p false-20210816222225-6986           | false-20210816222225-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:25 UTC | Mon, 16 Aug 2021 22:22:25 UTC |
	| start   | -p                                     | force-systemd-env-20210816222224-6986  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:24:04 UTC |
	|         | force-systemd-env-20210816222224-6986  |                                        |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | -v=5 --driver=kvm2                     |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| -p      | force-systemd-env-20210816222224-6986  | force-systemd-env-20210816222224-6986  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:04 UTC | Mon, 16 Aug 2021 22:24:04 UTC |
	|         | ssh cat /etc/containerd/config.toml    |                                        |         |         |                               |                               |
	| delete  | -p                                     | force-systemd-env-20210816222224-6986  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:04 UTC | Mon, 16 Aug 2021 22:24:05 UTC |
	|         | force-systemd-env-20210816222224-6986  |                                        |         |         |                               |                               |
	| start   | -p pause-20210816222224-6986           | pause-20210816222224-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:24:28 UTC |
	|         | --memory=2048                          |                                        |         |         |                               |                               |
	|         | --install-addons=false                 |                                        |         |         |                               |                               |
	|         | --wait=all --driver=kvm2               |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| start   | -p                                     | kubernetes-upgrade-20210816222225-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:25 UTC | Mon, 16 Aug 2021 22:24:48 UTC |
	|         | kubernetes-upgrade-20210816222225-6986 |                                        |         |         |                               |                               |
	|         | --memory=2200                          |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0           |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=kvm2   |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| stop    | -p                                     | kubernetes-upgrade-20210816222225-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:49 UTC | Mon, 16 Aug 2021 22:24:53 UTC |
	|         | kubernetes-upgrade-20210816222225-6986 |                                        |         |         |                               |                               |
	| start   | -p                                     | offline-containerd-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:25:02 UTC |
	|         | offline-containerd-20210816222224-6986 |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048   |                                        |         |         |                               |                               |
	|         | --wait=true --driver=kvm2              |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| delete  | -p                                     | offline-containerd-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:02 UTC | Mon, 16 Aug 2021 22:25:03 UTC |
	|         | offline-containerd-20210816222224-6986 |                                        |         |         |                               |                               |
	| start   | -p pause-20210816222224-6986           | pause-20210816222224-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:28 UTC | Mon, 16 Aug 2021 22:25:13 UTC |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                     |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:24:54
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:24:54.079177   10879 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:24:54.079273   10879 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:24:54.079278   10879 out.go:311] Setting ErrFile to fd 2...
	I0816 22:24:54.079280   10879 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:24:54.079426   10879 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:24:54.079721   10879 out.go:305] Setting JSON to false
	I0816 22:24:54.187099   10879 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":4056,"bootTime":1629148638,"procs":185,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:24:54.187527   10879 start.go:121] virtualization: kvm guest
	I0816 22:24:54.190315   10879 out.go:177] * [kubernetes-upgrade-20210816222225-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:24:54.192235   10879 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:24:54.190469   10879 notify.go:169] Checking for updates...
	I0816 22:24:54.193922   10879 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:24:54.195578   10879 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:24:54.197163   10879 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:24:54.197582   10879 config.go:177] Loaded profile config "kubernetes-upgrade-20210816222225-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0816 22:24:54.197998   10879 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:24:54.198058   10879 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:24:54.215228   10879 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45677
	I0816 22:24:54.215770   10879 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:24:54.216328   10879 main.go:130] libmachine: Using API Version  1
	I0816 22:24:54.216350   10879 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:24:54.216734   10879 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:24:54.216908   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:24:54.217075   10879 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:24:54.217475   10879 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:24:54.217512   10879 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:24:54.229224   10879 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34399
	I0816 22:24:54.229593   10879 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:24:54.230067   10879 main.go:130] libmachine: Using API Version  1
	I0816 22:24:54.230093   10879 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:24:54.230460   10879 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:24:54.230643   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:24:54.279869   10879 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 22:24:54.279899   10879 start.go:278] selected driver: kvm2
	I0816 22:24:54.279906   10879 start.go:751] validating driver "kvm2" against &{Name:kubernetes-upgrade-20210816222225-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210816222225-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.116.91 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:24:54.280014   10879 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 22:24:54.281335   10879 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:24:54.282098   10879 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 22:24:54.294712   10879 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0816 22:24:54.295176   10879 cni.go:93] Creating CNI manager for ""
	I0816 22:24:54.295202   10879 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:24:54.295212   10879 start_flags.go:277] config:
	{Name:kubernetes-upgrade-20210816222225-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210816222225-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.116.91 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:24:54.295364   10879 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:24:54.297417   10879 out.go:177] * Starting control plane node kubernetes-upgrade-20210816222225-6986 in cluster kubernetes-upgrade-20210816222225-6986
	I0816 22:24:54.297445   10879 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 22:24:54.297484   10879 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0816 22:24:54.297505   10879 cache.go:56] Caching tarball of preloaded images
	I0816 22:24:54.297634   10879 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 22:24:54.297656   10879 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on containerd
	I0816 22:24:54.297784   10879 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/config.json ...
	I0816 22:24:54.297977   10879 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:24:54.298007   10879 start.go:313] acquiring machines lock for kubernetes-upgrade-20210816222225-6986: {Name:mk808edd60d1305a42bb85791729eff4573dbb15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 22:24:54.298081   10879 start.go:317] acquired machines lock for "kubernetes-upgrade-20210816222225-6986" in 55.05µs
	I0816 22:24:54.298103   10879 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:24:54.298109   10879 fix.go:55] fixHost starting: 
	I0816 22:24:54.298510   10879 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:24:54.298561   10879 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:24:54.309226   10879 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33255
	I0816 22:24:54.309690   10879 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:24:54.310211   10879 main.go:130] libmachine: Using API Version  1
	I0816 22:24:54.310242   10879 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:24:54.310587   10879 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:24:54.310840   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:24:54.310996   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetState
	I0816 22:24:54.314433   10879 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20210816222225-6986: state=Stopped err=<nil>
	I0816 22:24:54.314482   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	W0816 22:24:54.314626   10879 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:24:52.760695    9171 pod_ready.go:102] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:53.612575   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:24:53.632520   10732 api_server.go:70] duration metric: took 7.033030474s to wait for apiserver process to appear ...
	I0816 22:24:53.632561   10732 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:24:53.632570   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:24:53.633109   10732 api_server.go:255] stopped: https://192.168.50.226:8443/healthz: Get "https://192.168.50.226:8443/healthz": dial tcp 192.168.50.226:8443: connect: connection refused
	I0816 22:24:54.133848   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:24:54.316518   10879 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-20210816222225-6986" ...
	I0816 22:24:54.316550   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .Start
	I0816 22:24:54.316716   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Ensuring networks are active...
	I0816 22:24:54.318718   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Ensuring network default is active
	I0816 22:24:54.319156   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Ensuring network mk-kubernetes-upgrade-20210816222225-6986 is active
	I0816 22:24:54.319641   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Getting domain xml...
	I0816 22:24:54.321602   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Creating domain...
	I0816 22:24:54.783576   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Waiting to get IP...
	I0816 22:24:54.784705   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:24:54.785273   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has current primary IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:24:54.785327   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Found IP for machine: 192.168.116.91
	I0816 22:24:54.785348   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Reserving static IP address...
	I0816 22:24:54.785810   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-20210816222225-6986", mac: "52:54:00:92:67:21", ip: "192.168.116.91"} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:23:40 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:24:54.785842   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Reserved static IP address: 192.168.116.91
	I0816 22:24:54.785867   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | skip adding static IP to network mk-kubernetes-upgrade-20210816222225-6986 - found existing host DHCP lease matching {name: "kubernetes-upgrade-20210816222225-6986", mac: "52:54:00:92:67:21", ip: "192.168.116.91"}
	I0816 22:24:54.785897   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | Getting to WaitForSSH function...
	I0816 22:24:54.785911   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Waiting for SSH to be available...
	I0816 22:24:54.791673   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:24:54.792070   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:23:40 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:24:54.792097   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:24:54.792320   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | Using SSH client type: external
	I0816 22:24:54.792359   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa (-rw-------)
	I0816 22:24:54.792401   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.116.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:24:54.792424   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | About to run SSH command:
	I0816 22:24:54.792441   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | exit 0
	I0816 22:24:55.186584    9171 pod_ready.go:102] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:57.682612    9171 pod_ready.go:102] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:59.683949    9171 pod_ready.go:102] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:59.090396   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:24:59.090431   10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:24:59.133677   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:24:59.161347   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:24:59.161378   10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:24:59.633911   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:24:59.639524   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:24:59.639548   10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:25:00.133775   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:25:00.151749   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:25:00.151784   10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:25:00.633968   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:25:00.646578   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 200:
	ok
	I0816 22:25:00.661937   10732 api_server.go:139] control plane version: v1.21.3
	I0816 22:25:00.661961   10732 api_server.go:129] duration metric: took 7.029396002s to wait for apiserver health ...
	I0816 22:25:00.661972   10732 cni.go:93] Creating CNI manager for ""
	I0816 22:25:00.661979   10732 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:25:01.185512    9171 pod_ready.go:92] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:01.185545    9171 pod_ready.go:81] duration metric: took 23.534022707s waiting for pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.185559    9171 pod_ready.go:78] waiting up to 6m0s for pod "etcd-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.215463    9171 pod_ready.go:92] pod "etcd-offline-containerd-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:01.215489    9171 pod_ready.go:81] duration metric: took 29.921986ms waiting for pod "etcd-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.215503    9171 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.230267    9171 pod_ready.go:92] pod "kube-apiserver-offline-containerd-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:01.230289    9171 pod_ready.go:81] duration metric: took 14.776227ms waiting for pod "kube-apiserver-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.230302    9171 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.241691    9171 pod_ready.go:92] pod "kube-controller-manager-offline-containerd-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:01.241717    9171 pod_ready.go:81] duration metric: took 11.405045ms waiting for pod "kube-controller-manager-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.241733    9171 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dhhrk" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.251986    9171 pod_ready.go:92] pod "kube-proxy-dhhrk" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:01.252017    9171 pod_ready.go:81] duration metric: took 10.275945ms waiting for pod "kube-proxy-dhhrk" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.252030    9171 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.580001    9171 pod_ready.go:92] pod "kube-scheduler-offline-containerd-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:01.580033    9171 pod_ready.go:81] duration metric: took 327.992243ms waiting for pod "kube-scheduler-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.580046    9171 pod_ready.go:38] duration metric: took 36.483444375s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:25:01.580071    9171 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:25:01.580124    9171 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:01.597074    9171 api_server.go:70] duration metric: took 36.950719971s to wait for apiserver process to appear ...
	I0816 22:25:01.597104    9171 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:25:01.597117    9171 api_server.go:239] Checking apiserver healthz at https://192.168.105.22:8443/healthz ...
	I0816 22:25:01.604325    9171 api_server.go:265] https://192.168.105.22:8443/healthz returned 200:
	ok
	I0816 22:25:01.606279    9171 api_server.go:139] control plane version: v1.21.3
	I0816 22:25:01.606301    9171 api_server.go:129] duration metric: took 9.189625ms to wait for apiserver health ...
	I0816 22:25:01.606312    9171 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:25:01.788694    9171 system_pods.go:59] 7 kube-system pods found
	I0816 22:25:01.788767    9171 system_pods.go:61] "coredns-558bd4d5db-jrjhw" [acdb9f4c-484e-4e02-97c3-368ce130507e] Running
	I0816 22:25:01.788794    9171 system_pods.go:61] "etcd-offline-containerd-20210816222224-6986" [5cab4619-a033-47c0-9009-225ece0f2892] Running
	I0816 22:25:01.788801    9171 system_pods.go:61] "kube-apiserver-offline-containerd-20210816222224-6986" [ea1abce8-a6d2-4e57-81c9-97bdd5eefea4] Running
	I0816 22:25:01.788808    9171 system_pods.go:61] "kube-controller-manager-offline-containerd-20210816222224-6986" [9e75aa0c-4fd9-4812-9163-c6c1a26c9f2e] Running
	I0816 22:25:01.788813    9171 system_pods.go:61] "kube-proxy-dhhrk" [a48ab7f9-7dfc-47de-8aca-c172bea7ff31] Running
	I0816 22:25:01.788819    9171 system_pods.go:61] "kube-scheduler-offline-containerd-20210816222224-6986" [3dd47537-37cc-49f2-a469-8ef39825ba4a] Running
	I0816 22:25:01.788827    9171 system_pods.go:61] "storage-provisioner" [e6290b9f-d87d-488d-8f9e-7cbbc59d9585] Running
	I0816 22:25:01.788835    9171 system_pods.go:74] duration metric: took 182.517591ms to wait for pod list to return data ...
	I0816 22:25:01.788850    9171 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:25:01.981356    9171 default_sa.go:45] found service account: "default"
	I0816 22:25:01.981387    9171 default_sa.go:55] duration metric: took 192.530827ms for default service account to be created ...
	I0816 22:25:01.981399    9171 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:25:02.190487    9171 system_pods.go:86] 7 kube-system pods found
	I0816 22:25:02.190528    9171 system_pods.go:89] "coredns-558bd4d5db-jrjhw" [acdb9f4c-484e-4e02-97c3-368ce130507e] Running
	I0816 22:25:02.190538    9171 system_pods.go:89] "etcd-offline-containerd-20210816222224-6986" [5cab4619-a033-47c0-9009-225ece0f2892] Running
	I0816 22:25:02.190546    9171 system_pods.go:89] "kube-apiserver-offline-containerd-20210816222224-6986" [ea1abce8-a6d2-4e57-81c9-97bdd5eefea4] Running
	I0816 22:25:02.190554    9171 system_pods.go:89] "kube-controller-manager-offline-containerd-20210816222224-6986" [9e75aa0c-4fd9-4812-9163-c6c1a26c9f2e] Running
	I0816 22:25:02.190560    9171 system_pods.go:89] "kube-proxy-dhhrk" [a48ab7f9-7dfc-47de-8aca-c172bea7ff31] Running
	I0816 22:25:02.190567    9171 system_pods.go:89] "kube-scheduler-offline-containerd-20210816222224-6986" [3dd47537-37cc-49f2-a469-8ef39825ba4a] Running
	I0816 22:25:02.190573    9171 system_pods.go:89] "storage-provisioner" [e6290b9f-d87d-488d-8f9e-7cbbc59d9585] Running
	I0816 22:25:02.190582    9171 system_pods.go:126] duration metric: took 209.176198ms to wait for k8s-apps to be running ...
	I0816 22:25:02.190596    9171 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:25:02.190648    9171 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:25:02.207959    9171 system_svc.go:56] duration metric: took 17.354686ms WaitForService to wait for kubelet.
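
The kubelet service check relies on the exit status of systemctl is-active: zero means the unit is active, non-zero means it is not. A sketch mirroring the exact command from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet ...` prints nothing and signals the
        // unit state purely through its exit code (0 = active)
        err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }
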
	I0816 22:25:02.207991    9171 kubeadm.go:547] duration metric: took 37.56164237s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:25:02.208036    9171 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:25:02.385401    9171 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:25:02.385432    9171 node_conditions.go:123] node cpu capacity is 2
	I0816 22:25:02.385444    9171 node_conditions.go:105] duration metric: took 177.399541ms to run NodePressure ...
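
The NodePressure step reads the node's capacity figures from the API; the cpu and ephemeral-storage numbers logged above come from the node status. A sketch of pulling the same figures with client-go, assuming a clientset built as in the earlier sketch:

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func nodeCapacities(cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
        return nil
    }
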
	I0816 22:25:02.385455    9171 start.go:231] waiting for startup goroutines ...
	I0816 22:25:02.438114    9171 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 22:25:02.440691    9171 out.go:177] * Done! kubectl is now configured to use "offline-containerd-20210816222224-6986" cluster and "default" namespace by default
	I0816 22:25:00.663954   10732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:25:00.664005   10732 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:25:00.674379   10732 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
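
The 457-byte payload written to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration. A representative conflist of that shape is shown below; the field values are illustrative, not the exact bytes from this run.

    {
      "cniVersion": "0.3.1",
      "name": "k8s",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
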
	I0816 22:25:00.699896   10732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:25:00.718704   10732 system_pods.go:59] 6 kube-system pods found
	I0816 22:25:00.718763   10732 system_pods.go:61] "coredns-558bd4d5db-gkxhz" [5aa76749-775e-423d-bbf9-680a20a27051] Running
	I0816 22:25:00.718780   10732 system_pods.go:61] "etcd-pause-20210816222224-6986" [f621b99e-0604-4bed-8c4e-4f5741e52f7b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 22:25:00.718802   10732 system_pods.go:61] "kube-apiserver-pause-20210816222224-6986" [b1c46709-4b0b-4c9c-a701-d595a58214ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 22:25:00.718811   10732 system_pods.go:61] "kube-controller-manager-pause-20210816222224-6986" [777c035e-5f34-469a-afb5-4f8ef90ccbfb] Running
	I0816 22:25:00.718819   10732 system_pods.go:61] "kube-proxy-7l59t" [3c0e0899-31c1-477a-a6d4-2844091deea2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:25:00.718830   10732 system_pods.go:61] "kube-scheduler-pause-20210816222224-6986" [6b32acf9-8108-45a6-901e-70cd125190f8] Running
	I0816 22:25:00.718838   10732 system_pods.go:74] duration metric: took 18.921493ms to wait for pod list to return data ...
	I0816 22:25:00.718847   10732 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:25:00.723789   10732 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:25:00.723820   10732 node_conditions.go:123] node cpu capacity is 2
	I0816 22:25:00.723836   10732 node_conditions.go:105] duration metric: took 4.978152ms to run NodePressure ...
	I0816 22:25:00.723854   10732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:01.396623   10732 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:25:01.403109   10732 kubeadm.go:746] kubelet initialised
	I0816 22:25:01.403139   10732 kubeadm.go:747] duration metric: took 6.492031ms waiting for restarted kubelet to initialise ...
	I0816 22:25:01.403151   10732 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:25:01.409386   10732 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:03.432924   10732 pod_ready.go:102] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:05.435685   10732 pod_ready.go:102] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:05.951433   10732 pod_ready.go:92] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:05.951457   10732 pod_ready.go:81] duration metric: took 4.542029801s waiting for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:05.951470   10732 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:06.969870   10732 pod_ready.go:92] pod "etcd-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:06.969903   10732 pod_ready.go:81] duration metric: took 1.018424787s waiting for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:06.969918   10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:06.978963   10732 pod_ready.go:92] pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:06.978984   10732 pod_ready.go:81] duration metric: took 9.058114ms waiting for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:06.978997   10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:07.986911   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | SSH cmd err, output: <nil>: 
	I0816 22:25:07.987289   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetConfigRaw
	I0816 22:25:07.988117   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetIP
	I0816 22:25:07.993471   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:07.993933   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:07.993970   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:07.994335   10879 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/config.json ...
	I0816 22:25:07.994547   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:07.994761   10879 machine.go:88] provisioning docker machine ...
	I0816 22:25:07.994788   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:07.994976   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetMachineName
	I0816 22:25:07.995114   10879 buildroot.go:166] provisioning hostname "kubernetes-upgrade-20210816222225-6986"
	I0816 22:25:07.995139   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetMachineName
	I0816 22:25:07.995291   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.000173   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.000497   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.000524   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.000680   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
	I0816 22:25:08.000825   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.000965   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.001081   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
	I0816 22:25:08.001235   10879 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:08.001401   10879 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.116.91 22 <nil> <nil>}
	I0816 22:25:08.001421   10879 main.go:130] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20210816222225-6986 && echo "kubernetes-upgrade-20210816222225-6986" | sudo tee /etc/hostname
	I0816 22:25:08.156978   10879 main.go:130] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20210816222225-6986
	
	I0816 22:25:08.157018   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.162417   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.162702   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.162735   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.162864   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
	I0816 22:25:08.163064   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.163277   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.163406   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
	I0816 22:25:08.163558   10879 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:08.163733   10879 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.116.91 22 <nil> <nil>}
	I0816 22:25:08.163761   10879 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20210816222225-6986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20210816222225-6986/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20210816222225-6986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:25:08.307005   10879 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:25:08.307035   10879 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:25:08.307053   10879 buildroot.go:174] setting up certificates
	I0816 22:25:08.307064   10879 provision.go:83] configureAuth start
	I0816 22:25:08.307075   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetMachineName
	I0816 22:25:08.307332   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetIP
	I0816 22:25:08.313331   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.313697   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.313729   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.313896   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.318531   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.318844   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.318878   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.318990   10879 provision.go:138] copyHostCerts
	I0816 22:25:08.319059   10879 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:25:08.319073   10879 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:25:08.319128   10879 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:25:08.319254   10879 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:25:08.319268   10879 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:25:08.319294   10879 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:25:08.319359   10879 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:25:08.319368   10879 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:25:08.319397   10879 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
	I0816 22:25:08.319465   10879 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20210816222225-6986 san=[192.168.116.91 192.168.116.91 localhost 127.0.0.1 minikube kubernetes-upgrade-20210816222225-6986]
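
provision.go signs a fresh server certificate with the profile's CA, embedding the SANs listed above. A compressed sketch of that signing step with crypto/x509; the file paths are illustrative, error handling is elided for brevity, and the CA key is assumed to be a PKCS#1 RSA key.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // load the CA pair (paths illustrative); errors elided for brevity
        caPEM, _ := os.ReadFile("ca.pem")
        caKeyPEM, _ := os.ReadFile("ca-key.pem")
        caBlock, _ := pem.Decode(caPEM)
        caCert, _ := x509.ParseCertificate(caBlock.Bytes)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes PKCS#1 RSA

        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-20210816222225-6986"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the log line above
            DNSNames:    []string{"localhost", "minikube", "kubernetes-upgrade-20210816222225-6986"},
            IPAddresses: []net.IP{net.ParseIP("192.168.116.91"), net.ParseIP("127.0.0.1")},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        _ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
        _ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600)
    }
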
	I0816 22:25:08.473458   10879 provision.go:172] copyRemoteCerts
	I0816 22:25:08.473513   10879 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:25:08.473535   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.478720   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.479123   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.479157   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.479301   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
	I0816 22:25:08.479517   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.479669   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
	I0816 22:25:08.479802   10879 sshutil.go:53] new ssh client: &{IP:192.168.116.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa Username:docker}
	I0816 22:25:08.575404   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:25:08.593200   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
	I0816 22:25:08.611874   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:25:08.631651   10879 provision.go:86] duration metric: configureAuth took 324.57656ms
	I0816 22:25:08.631679   10879 buildroot.go:189] setting minikube options for container-runtime
	I0816 22:25:08.631847   10879 config.go:177] Loaded profile config "kubernetes-upgrade-20210816222225-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0816 22:25:08.631862   10879 machine.go:91] provisioned docker machine in 637.081285ms
	I0816 22:25:08.631877   10879 start.go:267] post-start starting for "kubernetes-upgrade-20210816222225-6986" (driver="kvm2")
	I0816 22:25:08.631885   10879 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:25:08.631905   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:08.632222   10879 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:25:08.632262   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.638223   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.638599   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.638628   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.638804   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
	I0816 22:25:08.639025   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.639186   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
	I0816 22:25:08.639324   10879 sshutil.go:53] new ssh client: &{IP:192.168.116.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa Username:docker}
	I0816 22:25:08.731490   10879 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:25:08.736384   10879 info.go:137] Remote host: Buildroot 2020.02.12
	I0816 22:25:08.736415   10879 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:25:08.736479   10879 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:25:08.736640   10879 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0816 22:25:08.736796   10879 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:25:08.744563   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:25:08.762219   10879 start.go:270] post-start completed in 130.327769ms
	I0816 22:25:08.762269   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:08.762532   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.768066   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.768447   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.768479   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.768580   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
	I0816 22:25:08.768764   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.768937   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.769097   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
	I0816 22:25:08.769278   10879 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:08.769412   10879 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.116.91 22 <nil> <nil>}
	I0816 22:25:08.769423   10879 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0816 22:25:08.908369   10879 main.go:130] libmachine: SSH cmd err, output: <nil>: 1629152708.857933809
	
	I0816 22:25:08.908397   10879 fix.go:212] guest clock: 1629152708.857933809
	I0816 22:25:08.908407   10879 fix.go:225] Guest: 2021-08-16 22:25:08.857933809 +0000 UTC Remote: 2021-08-16 22:25:08.762514681 +0000 UTC m=+14.743694760 (delta=95.419128ms)
	I0816 22:25:08.908465   10879 fix.go:196] guest clock delta is within tolerance: 95.419128ms
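
The fix step compares the guest clock (read via `date +%s.%N` over SSH) against the host clock and only resynchronizes when the delta exceeds a tolerance. A sketch of that comparison; the guest timestamp is the one from the log, and the threshold is illustrative.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        guest := time.Unix(1629152708, 857933809) // parsed from the `date +%s.%N` output above
        host := time.Now()                        // host-side reference timestamp
        delta := host.Sub(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // illustrative threshold
        fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
    }
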
	I0816 22:25:08.908473   10879 fix.go:57] fixHost completed within 14.610364111s
	I0816 22:25:08.908483   10879 start.go:80] releasing machines lock for "kubernetes-upgrade-20210816222225-6986", held for 14.610387547s
	I0816 22:25:08.908527   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:08.908801   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetIP
	I0816 22:25:08.914888   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.915258   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.915290   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.915507   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:08.915732   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:08.916309   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:08.916592   10879 ssh_runner.go:149] Run: systemctl --version
	I0816 22:25:08.916617   10879 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:25:08.916626   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.916658   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.923331   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.923688   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.923714   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.923808   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
	I0816 22:25:08.923961   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.924114   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
	I0816 22:25:08.924243   10879 sshutil.go:53] new ssh client: &{IP:192.168.116.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa Username:docker}
	I0816 22:25:08.924528   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.924867   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.924898   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.925049   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
	I0816 22:25:08.925209   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.925407   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
	I0816 22:25:08.925534   10879 sshutil.go:53] new ssh client: &{IP:192.168.116.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa Username:docker}
	I0816 22:25:09.022865   10879 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 22:25:09.023038   10879 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:09.000201   10732 pod_ready.go:102] pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:10.499577   10732 pod_ready.go:92] pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:10.499613   10732 pod_ready.go:81] duration metric: took 3.520603411s waiting for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.499631   10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.508715   10732 pod_ready.go:92] pod "kube-proxy-7l59t" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:10.508738   10732 pod_ready.go:81] duration metric: took 9.098529ms waiting for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.508749   10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.514516   10732 pod_ready.go:92] pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:10.514536   10732 pod_ready.go:81] duration metric: took 5.779042ms waiting for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.514546   10732 pod_ready.go:38] duration metric: took 9.111379533s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:25:10.514567   10732 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:25:10.530219   10732 ops.go:34] apiserver oom_adj: -16
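
The oom_adj probe finds the apiserver PID and reads its /proc entry; a negative value like -16 deprioritizes the process for the kernel OOM killer. A sketch mirroring the shell pipeline from the log:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // equivalent of: cat /proc/$(pgrep kube-apiserver)/oom_adj
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.TrimSpace(string(out))
        data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
    }
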
	I0816 22:25:10.530242   10732 kubeadm.go:604] restartCluster took 31.19958524s
	I0816 22:25:10.530251   10732 kubeadm.go:392] StartCluster complete in 31.557512009s
	I0816 22:25:10.530271   10732 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:25:10.530404   10732 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:10.531238   10732 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:25:10.532000   10732 kapi.go:59] client config for pause-20210816222224-6986: &rest.Config{Host:"https://192.168.50.226:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 22:25:10.647656   10732 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210816222224-6986" rescaled to 1
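
The kapi.go step above rescales the coredns deployment to a single replica. A sketch of the same rescale through the Scale subresource, assuming a clientset built as in the earlier sketch:

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func rescaleCoreDNS(cs *kubernetes.Clientset) error {
        // read the current scale, set replicas to 1, and write it back
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(context.TODO(), "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = 1
        _, err = cs.AppsV1().Deployments("kube-system").UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{})
        return err
    }
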
	I0816 22:25:10.647728   10732 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.50.226 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:25:10.647757   10732 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:25:10.647794   10732 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0816 22:25:10.649327   10732 out.go:177] * Verifying Kubernetes components...
	I0816 22:25:10.649398   10732 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:25:10.647852   10732 addons.go:59] Setting storage-provisioner=true in profile "pause-20210816222224-6986"
	I0816 22:25:10.647862   10732 addons.go:59] Setting default-storageclass=true in profile "pause-20210816222224-6986"
	I0816 22:25:10.647991   10732 config.go:177] Loaded profile config "pause-20210816222224-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:25:10.649480   10732 addons.go:135] Setting addon storage-provisioner=true in "pause-20210816222224-6986"
	W0816 22:25:10.649500   10732 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:25:10.649516   10732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210816222224-6986"
	I0816 22:25:10.649532   10732 host.go:66] Checking if "pause-20210816222224-6986" exists ...
	I0816 22:25:10.650748   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:10.650827   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:10.653189   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:10.653249   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:10.664888   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0816 22:25:10.665365   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:10.665893   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:25:10.665915   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:10.666315   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:10.666493   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
	I0816 22:25:10.667827   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34733
	I0816 22:25:10.668293   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:10.668762   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:25:10.668782   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:10.669202   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:10.669761   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:10.669802   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:10.670861   10732 kapi.go:59] client config for pause-20210816222224-6986: &rest.Config{Host:"https://192.168.50.226:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 22:25:10.676486   10732 addons.go:135] Setting addon default-storageclass=true in "pause-20210816222224-6986"
	W0816 22:25:10.676510   10732 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:25:10.676539   10732 host.go:66] Checking if "pause-20210816222224-6986" exists ...
	I0816 22:25:10.676985   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:10.677031   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:10.682317   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39313
	I0816 22:25:10.682805   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:10.683360   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:25:10.683382   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:10.683737   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:10.683924   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
	I0816 22:25:10.687519   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:25:10.693597   10732 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:25:10.693708   10732 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:25:10.693722   10732 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:25:10.693742   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
	I0816 22:25:10.692712   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45043
	I0816 22:25:10.694563   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:10.695082   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:25:10.695103   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:10.695455   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:10.696063   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:10.696115   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:10.700367   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:25:10.700792   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
	I0816 22:25:10.700813   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:25:10.701111   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
	I0816 22:25:10.701350   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
	I0816 22:25:10.701537   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
	I0816 22:25:10.701730   10732 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
	I0816 22:25:10.709887   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33339
	I0816 22:25:10.710304   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:10.710912   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:25:10.710938   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:10.711336   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:10.711547   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
	I0816 22:25:10.714430   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:25:10.714683   10732 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:25:10.714702   10732 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:25:10.714720   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
	I0816 22:25:10.720808   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:25:10.721319   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
	I0816 22:25:10.721342   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:25:10.721485   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
	I0816 22:25:10.721643   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
	I0816 22:25:10.721769   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
	I0816 22:25:10.721919   10732 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
	I0816 22:25:10.832212   10732 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:25:10.862755   10732 node_ready.go:35] waiting up to 6m0s for node "pause-20210816222224-6986" to be "Ready" ...
	I0816 22:25:10.863120   10732 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0816 22:25:10.867110   10732 node_ready.go:49] node "pause-20210816222224-6986" has status "Ready":"True"
	I0816 22:25:10.867130   10732 node_ready.go:38] duration metric: took 4.344058ms waiting for node "pause-20210816222224-6986" to be "Ready" ...
	I0816 22:25:10.867143   10732 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:25:10.883113   10732 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.892065   10732 pod_ready.go:92] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:10.892084   10732 pod_ready.go:81] duration metric: took 8.944517ms waiting for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.892096   10732 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.895462   10732 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:25:11.127716   10732 pod_ready.go:92] pod "etcd-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:11.127749   10732 pod_ready.go:81] duration metric: took 235.644563ms waiting for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:11.127765   10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:11.536655   10732 pod_ready.go:92] pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:11.536676   10732 pod_ready.go:81] duration metric: took 408.901449ms waiting for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:11.536690   10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:11.539596   10732 main.go:130] libmachine: Making call to close driver server
	I0816 22:25:11.539618   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
	I0816 22:25:11.539697   10732 main.go:130] libmachine: Making call to close driver server
	I0816 22:25:11.539725   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
	I0816 22:25:11.540009   10732 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:25:11.540024   10732 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:25:11.540041   10732 main.go:130] libmachine: Making call to close driver server
	I0816 22:25:11.540041   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | Closing plugin on server side
	I0816 22:25:11.540051   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
	I0816 22:25:11.540067   10732 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:25:11.540075   10732 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:25:11.540083   10732 main.go:130] libmachine: Making call to close driver server
	I0816 22:25:11.540092   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
	I0816 22:25:11.540126   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | Closing plugin on server side
	I0816 22:25:11.540298   10732 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:25:11.540310   10732 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:25:11.540320   10732 main.go:130] libmachine: Making call to close driver server
	I0816 22:25:11.540329   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
	I0816 22:25:11.540417   10732 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:25:11.540429   10732 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:25:11.540490   10732 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:25:11.540502   10732 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:25:11.542638   10732 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0816 22:25:11.542662   10732 addons.go:344] enableAddons completed in 894.875902ms
	I0816 22:25:11.931820   10732 pod_ready.go:92] pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:11.931845   10732 pod_ready.go:81] duration metric: took 395.147421ms waiting for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:11.931860   10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:12.329464   10732 pod_ready.go:92] pod "kube-proxy-7l59t" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:12.329493   10732 pod_ready.go:81] duration metric: took 397.623774ms waiting for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:12.329507   10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:12.734335   10732 pod_ready.go:92] pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:12.734360   10732 pod_ready.go:81] duration metric: took 404.844565ms waiting for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:12.734374   10732 pod_ready.go:38] duration metric: took 1.867218741s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:25:12.734394   10732 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:25:12.734439   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:12.754510   10732 api_server.go:70] duration metric: took 2.106745047s to wait for apiserver process to appear ...
	I0816 22:25:12.754540   10732 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:25:12.754553   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:25:12.792067   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 200:
	ok
	I0816 22:25:12.794542   10732 api_server.go:139] control plane version: v1.21.3
	I0816 22:25:12.794565   10732 api_server.go:129] duration metric: took 40.01886ms to wait for apiserver health ...
	I0816 22:25:12.794577   10732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:25:12.941013   10732 system_pods.go:59] 7 kube-system pods found
	I0816 22:25:12.941048   10732 system_pods.go:61] "coredns-558bd4d5db-gkxhz" [5aa76749-775e-423d-bbf9-680a20a27051] Running
	I0816 22:25:12.941053   10732 system_pods.go:61] "etcd-pause-20210816222224-6986" [f621b99e-0604-4bed-8c4e-4f5741e52f7b] Running
	I0816 22:25:12.941057   10732 system_pods.go:61] "kube-apiserver-pause-20210816222224-6986" [b1c46709-4b0b-4c9c-a701-d595a58214ba] Running
	I0816 22:25:12.941102   10732 system_pods.go:61] "kube-controller-manager-pause-20210816222224-6986" [777c035e-5f34-469a-afb5-4f8ef90ccbfb] Running
	I0816 22:25:12.941116   10732 system_pods.go:61] "kube-proxy-7l59t" [3c0e0899-31c1-477a-a6d4-2844091deea2] Running
	I0816 22:25:12.941122   10732 system_pods.go:61] "kube-scheduler-pause-20210816222224-6986" [6b32acf9-8108-45a6-901e-70cd125190f8] Running
	I0816 22:25:12.941136   10732 system_pods.go:61] "storage-provisioner" [4f138dc7-da0e-4775-b4de-b0f7d616b212] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:25:12.941158   10732 system_pods.go:74] duration metric: took 146.575596ms to wait for pod list to return data ...
	I0816 22:25:12.941176   10732 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:25:13.132349   10732 default_sa.go:45] found service account: "default"
	I0816 22:25:13.132381   10732 default_sa.go:55] duration metric: took 191.195172ms for default service account to be created ...
	I0816 22:25:13.132394   10732 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:25:13.340094   10732 system_pods.go:86] 7 kube-system pods found
	I0816 22:25:13.340135   10732 system_pods.go:89] "coredns-558bd4d5db-gkxhz" [5aa76749-775e-423d-bbf9-680a20a27051] Running
	I0816 22:25:13.340146   10732 system_pods.go:89] "etcd-pause-20210816222224-6986" [f621b99e-0604-4bed-8c4e-4f5741e52f7b] Running
	I0816 22:25:13.340155   10732 system_pods.go:89] "kube-apiserver-pause-20210816222224-6986" [b1c46709-4b0b-4c9c-a701-d595a58214ba] Running
	I0816 22:25:13.340163   10732 system_pods.go:89] "kube-controller-manager-pause-20210816222224-6986" [777c035e-5f34-469a-afb5-4f8ef90ccbfb] Running
	I0816 22:25:13.340172   10732 system_pods.go:89] "kube-proxy-7l59t" [3c0e0899-31c1-477a-a6d4-2844091deea2] Running
	I0816 22:25:13.340184   10732 system_pods.go:89] "kube-scheduler-pause-20210816222224-6986" [6b32acf9-8108-45a6-901e-70cd125190f8] Running
	I0816 22:25:13.340196   10732 system_pods.go:89] "storage-provisioner" [4f138dc7-da0e-4775-b4de-b0f7d616b212] Running
	I0816 22:25:13.340210   10732 system_pods.go:126] duration metric: took 207.809217ms to wait for k8s-apps to be running ...
	I0816 22:25:13.340225   10732 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:25:13.340279   10732 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:25:13.358716   10732 system_svc.go:56] duration metric: took 18.47804ms WaitForService to wait for kubelet.
	I0816 22:25:13.358752   10732 kubeadm.go:547] duration metric: took 2.710991068s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:25:13.358785   10732 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:25:13.536797   10732 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:25:13.536830   10732 node_conditions.go:123] node cpu capacity is 2
	I0816 22:25:13.536848   10732 node_conditions.go:105] duration metric: took 178.056493ms to run NodePressure ...
	I0816 22:25:13.536863   10732 start.go:231] waiting for startup goroutines ...
	I0816 22:25:13.602415   10732 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 22:25:13.604425   10732 out.go:177] * Done! kubectl is now configured to use "pause-20210816222224-6986" cluster and "default" namespace by default
	I0816 22:25:13.045168   10879 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.02209826s)
	I0816 22:25:13.045290   10879 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
	I0816 22:25:13.045383   10879 ssh_runner.go:149] Run: which lz4
	I0816 22:25:13.050542   10879 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 22:25:13.055627   10879 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0816 22:25:13.055661   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (945588089 bytes)
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	f04c445038901       6e38f40d628db       2 seconds ago        Running             storage-provisioner       0                   0e10d9862204b
	e70dd80568a0a       296a6d5035e2d       13 seconds ago       Running             coredns                   1                   c649190b7c07d
	2585772c8a261       adb2816ea823a       14 seconds ago       Running             kube-proxy                2                   d73b4cafe25f0
	53780b2759956       3d174f00aa39e       21 seconds ago       Running             kube-apiserver            2                   fb9f201b2c2e1
	76fef890edebe       6be0dc1302e30       21 seconds ago       Running             kube-scheduler            2                   1718d2a0276ce
	69a7fab4848c4       0369cf4303ffd       21 seconds ago       Running             etcd                      2                   3b9459ff3a0d8
	825e79d62718c       bc2bb319a7038       22 seconds ago       Running             kube-controller-manager   2                   feab707eb735a
	7626b842ef886       3d174f00aa39e       22 seconds ago       Created             kube-apiserver            1                   fb9f201b2c2e1
	9d9f34b35e099       adb2816ea823a       22 seconds ago       Created             kube-proxy                1                   d73b4cafe25f0
	97c4cc3614116       6be0dc1302e30       22 seconds ago       Created             kube-scheduler            1                   1718d2a0276ce
	3644e35e40a2f       0369cf4303ffd       22 seconds ago       Created             etcd                      1                   3b9459ff3a0d8
	8c5f2c007cff4       bc2bb319a7038       26 seconds ago       Created             kube-controller-manager   1                   feab707eb735a
	28c7161cd49a4       296a6d5035e2d       About a minute ago   Exited              coredns                   0                   05c2427240818
	a8503bd796d5d       adb2816ea823a       About a minute ago   Exited              kube-proxy                0                   a86c3b6ee3a70
	124fa393359f7       0369cf4303ffd       2 minutes ago        Exited              etcd                      0                   94a493a65b593
	8710cefecdbe5       6be0dc1302e30       2 minutes ago        Exited              kube-scheduler            0                   982e66890a90d
	38dc61b214a9c       3d174f00aa39e       2 minutes ago        Exited              kube-apiserver            0                   630ed9d4644e9
	
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2021-08-16 22:22:35 UTC, end at Mon 2021-08-16 22:25:15 UTC. --
	Aug 16 22:24:53 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:24:53.606374155Z" level=info msg="StartContainer for \"69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549\" returns successfully"
	Aug 16 22:24:53 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:24:53.687984942Z" level=info msg="StartContainer for \"76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1\" returns successfully"
	Aug 16 22:24:59 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:24:59.121993631Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.146522428Z" level=info msg="CreateContainer within sandbox \"d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:2,}"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.231610260Z" level=info msg="CreateContainer within sandbox \"d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8\" for &ContainerMetadata{Name:kube-proxy,Attempt:2,} returns container id \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.233198734Z" level=info msg="StartContainer for \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.443953465Z" level=info msg="StopPodSandbox for \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.444769081Z" level=info msg="Container to stop \"28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.461877463Z" level=info msg="StartContainer for \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\" returns successfully"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536079353Z" level=info msg="TearDown network for sandbox \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\" successfully"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536191167Z" level=info msg="StopPodSandbox for \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\" returns successfully"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536962082Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-gkxhz,Uid:5aa76749-775e-423d-bbf9-680a20a27051,Namespace:kube-system,Attempt:1,}"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.776744568Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb pid=5007
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.290447333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-gkxhz,Uid:5aa76749-775e-423d-bbf9-680a20a27051,Namespace:kube-system,Attempt:1,} returns sandbox id \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\""
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.300600113Z" level=info msg="CreateContainer within sandbox \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.389478760Z" level=info msg="CreateContainer within sandbox \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\""
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.397162604Z" level=info msg="StartContainer for \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\""
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.594046909Z" level=info msg="StartContainer for \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\" returns successfully"
	Aug 16 22:25:11 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:11.852957632Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:4f138dc7-da0e-4775-b4de-b0f7d616b212,Namespace:kube-system,Attempt:0,}"
	Aug 16 22:25:11 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:11.903771908Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c pid=5174
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.439549893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:4f138dc7-da0e-4775-b4de-b0f7d616b212,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\""
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.451930506Z" level=info msg="CreateContainer within sandbox \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.521875733Z" level=info msg="CreateContainer within sandbox \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.523292924Z" level=info msg="StartContainer for \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.851898064Z" level=info msg="StartContainer for \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\" returns successfully"
	
	* 
	* ==> coredns [28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	I0816 22:24:19.170128       1 trace.go:205] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.168) (total time: 30001ms):
	Trace[2019727887]: [30.001909435s] [30.001909435s] END
	E0816 22:24:19.170279       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0816 22:24:19.171047       1 trace.go:205] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.166) (total time: 30004ms):
	Trace[939984059]: [30.004733433s] [30.004733433s] END
	E0816 22:24:19.171149       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0816 22:24:19.171258       1 trace.go:205] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.166) (total time: 30004ms):
	Trace[911902081]: [30.004945736s] [30.004945736s] END
	E0816 22:24:19.171265       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> coredns [e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7ae91e86dd75dee9ae501cb58003198b
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210816222224-6986
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20210816222224-6986
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=pause-20210816222224-6986
	                    minikube.k8s.io/updated_at=2021_08_16T22_23_26_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Aug 2021 22:23:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210816222224-6986
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Aug 2021 22:25:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Aug 2021 22:24:59 +0000   Mon, 16 Aug 2021 22:23:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Aug 2021 22:24:59 +0000   Mon, 16 Aug 2021 22:23:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Aug 2021 22:24:59 +0000   Mon, 16 Aug 2021 22:23:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Aug 2021 22:24:59 +0000   Mon, 16 Aug 2021 22:23:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.226
	  Hostname:    pause-20210816222224-6986
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2033044Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2033044Ki
	  pods:               110
	System Info:
	  Machine ID:                 940ad300f94c41e2a0b0cde81be11541
	  System UUID:                940ad300-f94c-41e2-a0b0-cde81be11541
	  Boot ID:                    ea001a4b-e783-4f93-b7d3-bb910eb45d3c
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-gkxhz                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     89s
	  kube-system                 etcd-pause-20210816222224-6986                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         111s
	  kube-system                 kube-apiserver-pause-20210816222224-6986             250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-pause-20210816222224-6986    200m (10%)    0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-proxy-7l59t                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-pause-20210816222224-6986             100m (5%)     0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  2m3s (x6 over 2m4s)  kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x5 over 2m4s)  kubelet     Node pause-20210816222224-6986 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x5 over 2m4s)  kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientPID
	  Normal  Starting                 103s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  103s                 kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s                 kubelet     Node pause-20210816222224-6986 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s                 kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  103s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                93s                  kubelet     Node pause-20210816222224-6986 status is now: NodeReady
	  Normal  Starting                 86s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 24s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  24s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23s (x8 over 24s)    kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23s (x8 over 24s)    kubelet     Node pause-20210816222224-6986 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23s (x7 over 24s)    kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientPID
	  Normal  Starting                 15s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +3.181431] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.036573] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.985023] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1731 comm=systemd-network
	[  +1.088197] vboxguest: loading out-of-tree module taints kernel.
	[  +0.006251] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.889854] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[ +16.286436] systemd-fstab-generator[2098]: Ignoring "noauto" for root device
	[  +0.258185] systemd-fstab-generator[2128]: Ignoring "noauto" for root device
	[  +0.135377] systemd-fstab-generator[2143]: Ignoring "noauto" for root device
	[  +0.180446] systemd-fstab-generator[2173]: Ignoring "noauto" for root device
	[Aug16 22:23] systemd-fstab-generator[2381]: Ignoring "noauto" for root device
	[ +20.504547] systemd-fstab-generator[2808]: Ignoring "noauto" for root device
	[ +20.717915] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.551219] kauditd_printk_skb: 104 callbacks suppressed
	[Aug16 22:24] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.792051] systemd-fstab-generator[3754]: Ignoring "noauto" for root device
	[  +0.176916] systemd-fstab-generator[3767]: Ignoring "noauto" for root device
	[  +0.230657] systemd-fstab-generator[3792]: Ignoring "noauto" for root device
	[  +4.083098] kauditd_printk_skb: 2 callbacks suppressed
	[  +3.840195] NFSD: Unable to end grace period: -110
	[  +4.324119] systemd-fstab-generator[4543]: Ignoring "noauto" for root device
	[  +6.680726] kauditd_printk_skb: 29 callbacks suppressed
	[Aug16 22:25] kauditd_printk_skb: 14 callbacks suppressed
	[ +12.641213] kauditd_printk_skb: 23 callbacks suppressed
	
	* 
	* ==> etcd [124fa393359f758ea47161b345d2cab4ce486d4473a4caad483449464d44315f] <==
	* 2021-08-16 22:23:41.064197 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210816222224-6986\" " with result "range_response_count:1 size:5052" took too long (6.421187445s) to execute
	2021-08-16 22:23:41.065847 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:341" took too long (6.446897155s) to execute
	2021-08-16 22:23:41.066285 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/node-controller\" " with result "range_response_count:1 size:242" took too long (5.09674902s) to execute
	2021-08-16 22:23:41.068005 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (6.28196539s) to execute
	2021-08-16 22:23:41.068259 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (763.710719ms) to execute
	2021-08-16 22:23:41.880435 W | etcdserver: read-only range request "key:\"/registry/masterleases/192.168.50.226\" " with result "range_response_count:0 size:5" took too long (776.335267ms) to execute
	2021-08-16 22:23:41.881080 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (597.366064ms) to execute
	2021-08-16 22:23:41.882354 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:4569" took too long (763.841142ms) to execute
	2021-08-16 22:23:41.883287 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (621.677263ms) to execute
	2021-08-16 22:23:41.884722 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (481.499599ms) to execute
	2021-08-16 22:23:41.885189 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-pause-20210816222224-6986\" " with result "range_response_count:1 size:5421" took too long (772.180278ms) to execute
	2021-08-16 22:23:42.453217 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (290.061418ms) to execute
	2021-08-16 22:23:42.455427 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (285.893643ms) to execute
	2021-08-16 22:23:42.456943 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-pause-20210816222224-6986\" " with result "range_response_count:1 size:6314" took too long (153.946258ms) to execute
	2021-08-16 22:23:42.458024 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (177.825431ms) to execute
	2021-08-16 22:23:44.267832 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:23:54.092150 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (701.802797ms) to execute
	2021-08-16 22:23:54.093518 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-ctgf5\" " with result "range_response_count:1 size:4473" took too long (1.090386256s) to execute
	2021-08-16 22:23:54.267392 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:23:57.768234 W | etcdserver: request "header:<ID:4263355585347158035 > lease_revoke:<id:3b2a7b510fcb7e67>" with result "size:29" took too long (771.90226ms) to execute
	2021-08-16 22:23:57.768903 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (374.444829ms) to execute
	2021-08-16 22:23:57.769379 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-ctgf5\" " with result "range_response_count:1 size:4473" took too long (765.115046ms) to execute
	2021-08-16 22:24:04.267548 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:24:14.267958 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:24:24.268321 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> etcd [3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423] <==
	* 
	* ==> etcd [69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549] <==
	* raft2021/08/16 22:24:53 INFO: newRaft e840193bf29c3b2a [peers: [], term: 2, commit: 515, applied: 0, lastindex: 515, lastterm: 2]
	2021-08-16 22:24:53.773065 W | auth: simple token is not cryptographically signed
	2021-08-16 22:24:53.837118 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	raft2021/08/16 22:24:53 INFO: e840193bf29c3b2a switched to configuration voters=(16735403960572853034)
	2021-08-16 22:24:53.849298 I | etcdserver/membership: added member e840193bf29c3b2a [https://192.168.50.226:2380] to cluster 99b90e1bea73c730
	2021-08-16 22:24:53.860198 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-16 22:24:53.864997 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-16 22:24:53.865214 I | embed: listening for peers on 192.168.50.226:2380
	2021-08-16 22:24:53.868083 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-16 22:24:53.871735 I | etcdserver/api: enabled capabilities for version 3.4
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a is starting a new election at term 2
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a became candidate at term 3
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a received MsgVoteResp from e840193bf29c3b2a at term 3
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a became leader at term 3
	raft2021/08/16 22:24:54 INFO: raft.node: e840193bf29c3b2a elected leader e840193bf29c3b2a at term 3
	2021-08-16 22:24:54.968820 I | embed: ready to serve client requests
	2021-08-16 22:24:54.969394 I | etcdserver: published {Name:pause-20210816222224-6986 ClientURLs:[https://192.168.50.226:2379]} to cluster 99b90e1bea73c730
	2021-08-16 22:24:54.971284 I | embed: serving client requests on 192.168.50.226:2379
	2021-08-16 22:24:54.971462 I | embed: ready to serve client requests
	2021-08-16 22:24:54.973508 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-16 22:25:03.067902 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-gkxhz\" " with result "range_response_count:1 size:4860" took too long (140.807991ms) to execute
	2021-08-16 22:25:06.747736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:25:08.138740 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:25:10.645123 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:3838" took too long (108.124514ms) to execute
	2021-08-16 22:25:10.645989 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:665" took too long (107.967343ms) to execute
	
	* 
	* ==> kernel <==
	*  22:25:15 up 2 min,  0 users,  load average: 3.55, 1.59, 0.61
	Linux pause-20210816222224-6986 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [38dc61b214a9cbd019de4ca9ab52fb6baf728336de6d715df22b027522ad8b20] <==
	* I0816 22:23:41.890272       1 trace.go:205] Trace[914939944]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:23:41.117) (total time: 772ms):
	Trace[914939944]: [772.29448ms] [772.29448ms] END
	I0816 22:23:41.897880       1 trace.go:205] Trace[372773048]: "List" url:/api/v1/nodes,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/system:serviceaccount:kube-system:node-controller,client:192.168.50.226,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:41.117) (total time: 780ms):
	Trace[372773048]: ---"Listing from storage done" 773ms (22:23:00.891)
	Trace[372773048]: [780.024685ms] [780.024685ms] END
	I0816 22:23:41.899245       1 trace.go:205] Trace[189474875]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-20210816222224-6986,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.50.226,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (16-Aug-2021 22:23:41.107) (total time: 791ms):
	Trace[189474875]: ---"About to write a response" 791ms (22:23:00.899)
	Trace[189474875]: [791.769473ms] [791.769473ms] END
	I0816 22:23:41.914143       1 trace.go:205] Trace[1803257945]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (16-Aug-2021 22:23:41.101) (total time: 812ms):
	Trace[1803257945]: ---"initial value restored" 795ms (22:23:00.897)
	Trace[1803257945]: [812.099383ms] [812.099383ms] END
	I0816 22:23:46.219827       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0816 22:23:46.322056       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0816 22:23:54.101003       1 trace.go:205] Trace[1429856954]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-ctgf5,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.50.1,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:53.002) (total time: 1098ms):
	Trace[1429856954]: ---"About to write a response" 1098ms (22:23:00.100)
	Trace[1429856954]: [1.0988209s] [1.0988209s] END
	I0816 22:23:56.194218       1 client.go:360] parsed scheme: "passthrough"
	I0816 22:23:56.194943       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 22:23:56.195388       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0816 22:23:57.770900       1 trace.go:205] Trace[2103117378]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-ctgf5,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.50.1,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:57.002) (total time: 767ms):
	Trace[2103117378]: ---"About to write a response" 767ms (22:23:00.770)
	Trace[2103117378]: [767.944134ms] [767.944134ms] END
	I0816 22:24:32.818404       1 client.go:360] parsed scheme: "passthrough"
	I0816 22:24:32.818597       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 22:24:32.818691       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-apiserver [53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf] <==
	* I0816 22:24:59.052878       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0816 22:24:59.052897       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0816 22:24:59.071128       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0816 22:24:59.071704       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0816 22:24:59.072328       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0816 22:24:59.072872       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
	I0816 22:24:59.173327       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0816 22:24:59.176720       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	E0816 22:24:59.181278       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0816 22:24:59.206356       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0816 22:24:59.225165       1 cache.go:39] Caches are synced for autoregister controller
	I0816 22:24:59.227741       1 apf_controller.go:299] Running API Priority and Fairness config worker
	I0816 22:24:59.230223       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0816 22:24:59.244026       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 22:24:59.248943       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0816 22:25:00.021310       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0816 22:25:00.022052       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0816 22:25:00.034218       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0816 22:25:01.108795       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0816 22:25:01.182177       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0816 22:25:01.279321       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0816 22:25:01.344553       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 22:25:01.382891       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 22:25:11.471022       1 controller.go:611] quota admission added evaluator for: endpoints
	I0816 22:25:13.002505       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612] <==
	* 
	* ==> kube-controller-manager [825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7] <==
	* I0816 22:25:12.900492       1 shared_informer.go:247] Caches are synced for GC 
	I0816 22:25:12.900735       1 shared_informer.go:247] Caches are synced for job 
	I0816 22:25:12.908539       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0816 22:25:12.910182       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0816 22:25:12.925990       1 shared_informer.go:247] Caches are synced for stateful set 
	I0816 22:25:12.926195       1 shared_informer.go:247] Caches are synced for HPA 
	I0816 22:25:12.931999       1 shared_informer.go:247] Caches are synced for attach detach 
	I0816 22:25:12.933971       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0816 22:25:12.934151       1 shared_informer.go:247] Caches are synced for deployment 
	I0816 22:25:12.943776       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0816 22:25:12.963727       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0816 22:25:12.969209       1 shared_informer.go:247] Caches are synced for taint 
	I0816 22:25:12.969381       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	W0816 22:25:12.969524       1 node_lifecycle_controller.go:1013] Missing timestamp for Node pause-20210816222224-6986. Assuming now as a timestamp.
	I0816 22:25:12.969564       1 node_lifecycle_controller.go:1214] Controller detected that zone  is now in state Normal.
	I0816 22:25:12.970457       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0816 22:25:12.970831       1 event.go:291] "Event occurred" object="pause-20210816222224-6986" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20210816222224-6986 event: Registered Node pause-20210816222224-6986 in Controller"
	I0816 22:25:12.974749       1 shared_informer.go:247] Caches are synced for endpoint 
	I0816 22:25:13.000548       1 shared_informer.go:247] Caches are synced for disruption 
	I0816 22:25:13.000739       1 disruption.go:371] Sending events to api server.
	I0816 22:25:13.004608       1 shared_informer.go:247] Caches are synced for resource quota 
	I0816 22:25:13.016848       1 shared_informer.go:247] Caches are synced for resource quota 
	I0816 22:25:13.386564       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0816 22:25:13.386597       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0816 22:25:13.440139       1 shared_informer.go:247] Caches are synced for garbage collector 
	
	* 
	* ==> kube-controller-manager [8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4] <==
	* 
	* ==> kube-proxy [2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2] <==
	* I0816 22:25:00.641886       1 node.go:172] Successfully retrieved node IP: 192.168.50.226
	I0816 22:25:00.641938       1 server_others.go:140] Detected node IP 192.168.50.226
	W0816 22:25:00.642012       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0816 22:25:00.805515       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0816 22:25:00.805539       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0816 22:25:00.805560       1 server_others.go:212] Using iptables Proxier.
	I0816 22:25:00.806059       1 server.go:643] Version: v1.21.3
	I0816 22:25:00.807251       1 config.go:315] Starting service config controller
	I0816 22:25:00.807281       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:25:00.807307       1 config.go:224] Starting endpoint slice config controller
	I0816 22:25:00.807313       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:25:00.812511       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 22:25:00.816722       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:25:00.907844       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 22:25:00.907906       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d] <==
	* 
	* ==> kube-proxy [a8503bd796d5d979a6e1b8b5154986e8b77de391b4f091211451ea5f52808e52] <==
	* I0816 22:23:49.316430       1 node.go:172] Successfully retrieved node IP: 192.168.50.226
	I0816 22:23:49.316608       1 server_others.go:140] Detected node IP 192.168.50.226
	W0816 22:23:49.316822       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0816 22:23:49.402698       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0816 22:23:49.403462       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0816 22:23:49.404047       1 server_others.go:212] Using iptables Proxier.
	I0816 22:23:49.407950       1 server.go:643] Version: v1.21.3
	I0816 22:23:49.410864       1 config.go:315] Starting service config controller
	I0816 22:23:49.413112       1 config.go:224] Starting endpoint slice config controller
	I0816 22:23:49.419474       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:23:49.421254       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:23:49.413718       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:23:49.425958       1 shared_informer.go:247] Caches are synced for service config 
	W0816 22:23:49.425586       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:23:49.520425       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1] <==
	* I0816 22:24:54.634243       1 serving.go:347] Generated self-signed cert in-memory
	W0816 22:24:59.095457       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 22:24:59.098028       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 22:24:59.098491       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 22:24:59.098734       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 22:24:59.166481       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0816 22:24:59.178395       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:24:59.177851       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0816 22:24:59.194249       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:24:59.304036       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [8710cefecdbe5d31cd44e9ae3378bc08cbc56001326a1cb38026755196cac7d1] <==
	* E0816 22:23:21.172468       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:23:21.189536       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:23:21.300836       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.329219       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.448607       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:23:21.504104       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.504531       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:23:21.597849       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:23:21.612843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:23:21.671333       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:23:21.827198       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:23:21.852843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:23:21.867015       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.910139       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:23:23.291774       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.356078       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.452841       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:23:23.464942       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:23:23.644764       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:23:23.649142       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.710606       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.980099       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:23:24.052112       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:23:24.168543       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0816 22:23:30.043826       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
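
Note: the run of "is forbidden" reflector errors above is the usual startup race, not the test failure itself: the scheduler's informers begin listing resources before the apiserver's rbac/bootstrap-roles post-start hook has finished creating the system:kube-scheduler bindings (the same hook appears as failing in the /healthz output further down). Once the hook completes, the List calls succeed and the caches sync, which is the final line of this section. A hypothetical client-go sketch of one such call follows; the scheduler kubeconfig path is the conventional kubeadm location, assumed here:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed path: the conventional kubeadm scheduler kubeconfig, so the
        // request is made as system:kube-scheduler like the reflector above.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/scheduler.conf")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        _, err = cs.StorageV1().CSINodes().List(context.TODO(), metav1.ListOptions{})
        fmt.Println("list csinodes:", err) // "forbidden" until bootstrap roles exist, then nil
    }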
	
	* 
	* ==> kube-scheduler [97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a] <==
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:22:35 UTC, end at Mon 2021-08-16 22:25:16 UTC. --
	Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.514985    4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
	Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.616076    4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
	Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.718006    4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
	Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.819104    4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
	Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.919357    4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:59.020392    4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.121233    4551 kuberuntime_manager.go:1044] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.122462    4551 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.228577    4551 kubelet_node_status.go:109] "Node was previously registered" node="pause-20210816222224-6986"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.228853    4551 kubelet_node_status.go:74] "Successfully registered node" node="pause-20210816222224-6986"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.536346    4551 apiserver.go:52] "Watching apiserver"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.540959    4551 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.541581    4551 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.609734    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c0e0899-31c1-477a-a6d4-2844091deea2-kube-proxy\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.610130    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c0e0899-31c1-477a-a6d4-2844091deea2-xtables-lock\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.610271    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c0e0899-31c1-477a-a6d4-2844091deea2-lib-modules\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.610503    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2grh\" (UniqueName: \"kubernetes.io/projected/3c0e0899-31c1-477a-a6d4-2844091deea2-kube-api-access-b2grh\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.711424    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgpd2\" (UniqueName: \"kubernetes.io/projected/5aa76749-775e-423d-bbf9-680a20a27051-kube-api-access-rgpd2\") pod \"coredns-558bd4d5db-gkxhz\" (UID: \"5aa76749-775e-423d-bbf9-680a20a27051\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.712578    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5aa76749-775e-423d-bbf9-680a20a27051-config-volume\") pod \"coredns-558bd4d5db-gkxhz\" (UID: \"5aa76749-775e-423d-bbf9-680a20a27051\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.713123    4551 reconciler.go:157] "Reconciler: start to sync state"
	Aug 16 22:25:00 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:00.142816    4551 scope.go:111] "RemoveContainer" containerID="9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d"
	Aug 16 22:25:03 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:03.115940    4551 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Aug 16 22:25:11 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:11.548694    4551 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:25:11 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:11.620746    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4f138dc7-da0e-4775-b4de-b0f7d616b212-tmp\") pod \"storage-provisioner\" (UID: \"4f138dc7-da0e-4775-b4de-b0f7d616b212\") "
	Aug 16 22:25:11 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:11.621027    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7pzn\" (UniqueName: \"kubernetes.io/projected/4f138dc7-da0e-4775-b4de-b0f7d616b212-kube-api-access-n7pzn\") pod \"storage-provisioner\" (UID: \"4f138dc7-da0e-4775-b4de-b0f7d616b212\") "
	
	* 
	* ==> storage-provisioner [f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a] <==
	* I0816 22:25:12.920503       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 22:25:12.958814       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 22:25:12.959432       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0816 22:25:13.028463       1 leaderelection.go:361] Failed to update lock: Operation cannot be fulfilled on endpoints "k8s.io-minikube-hostpath": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/kube-system/k8s.io-minikube-hostpath, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 3f27bbad-30a1-4386-9d09-80525f79ada9, UID in object meta: 
	I0816 22:25:16.530709       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 22:25:16.540393       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210816222224-6986_409bd634-6095-4f9a-ab3f-09a5e699e184!
	I0816 22:25:16.544131       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"32edeef2-57a3-43b1-a3d9-e7ecc2ed1a14", APIVersion:"v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210816222224-6986_409bd634-6095-4f9a-ab3f-09a5e699e184 became leader
	I0816 22:25:16.647143       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210816222224-6986_409bd634-6095-4f9a-ab3f-09a5e699e184!
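
Note: the storage-provisioner section above is a compact leader-election trace. The first Update of the kube-system/k8s.io-minikube-hostpath Endpoints lock fails a UID precondition because the client still holds the pre-restart object's UID; the elector retries, acquires the lease, and only then starts the controller. A minimal sketch of that pattern, assuming client-go's leaderelection package (this is not minikube's actual provisioner code):

    package main

    import (
        "context"
        "log"
        "os"
        "time"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        id, _ := os.Hostname()
        // Same Endpoints-backed lock the log shows; a stale-UID update conflict
        // is simply retried on the next tick.
        lock, err := resourcelock.New(resourcelock.EndpointsResourceLock,
            "kube-system", "k8s.io-minikube-hostpath",
            cs.CoreV1(), cs.CoordinationV1(),
            resourcelock.ResourceLockConfig{Identity: id})
        if err != nil {
            log.Fatal(err)
        }
        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { log.Println("became leader; starting controller") },
                OnStoppedLeading: func() { log.Println("lost lease") },
            },
        })
    }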
	
	

-- /stdout --
** stderr ** 
	E0816 22:25:15.726460   11246 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:25:15Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log\": lstat /var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:25:15Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log\\\": lstat /var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:25:16.028111   11246 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:25:16Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log\": lstat /var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:25:16Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log\\\": lstat /var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:25:16.153609   11246 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:25:16Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log\": lstat /var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:25:16Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log\\\": lstat /var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:25:16.301310   11246 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:25:16Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log\": lstat /var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:25:16Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log\\\": lstat /var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:25:16.619092   11246 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:25:16Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log\": lstat /var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:25:16Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log\\\": lstat /var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log: no such file or directory\"\n\n** /stderr **"
	! unable to fetch logs for: etcd [3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423], kube-apiserver [7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612], kube-controller-manager [8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4], kube-proxy [9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d], kube-scheduler [97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a]

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
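
Note: exit status 110 here comes from the five crictl failures above, not from a minikube crash. `crictl logs` resolves each container's log file under /var/log/pods/<namespace>_<pod>_<uid>/<container>/<attempt>.log and lstat()s it; the attempt-1 files were removed when the pods were recreated during the second start, so every fetch exits non-zero and the harness reports the control-plane components as unfetchable. A hedged sketch of the failing check, using the path from the first error:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Path copied from the first crictl error above:
        // /var/log/pods/<namespace>_<pod>_<uid>/<container>/<attempt>.log
        p := filepath.Join("/var/log/pods",
            "kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815",
            "etcd", "1.log")
        if _, err := os.Lstat(p); err != nil {
            fmt.Println(err) // lstat ...: no such file or directory
        }
    }
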
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816222224-6986 -n pause-20210816222224-6986
helpers_test.go:245: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210816222224-6986 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210816222224-6986 logs -n 25: exit status 110 (2.026956303s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                  |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | multinode-20210816215441-6986          | multinode-20210816215441-6986          | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:08:07 UTC | Mon, 16 Aug 2021 22:11:11 UTC |
	|         | stop                                   |                                        |         |         |                               |                               |
	| start   | -p                                     | multinode-20210816215441-6986          | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:11:11 UTC | Mon, 16 Aug 2021 22:15:19 UTC |
	|         | multinode-20210816215441-6986          |                                        |         |         |                               |                               |
	|         | --wait=true -v=8                       |                                        |         |         |                               |                               |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | --driver=kvm2                          |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| start   | -p                                     | multinode-20210816215441-6986-m03      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:19 UTC | Mon, 16 Aug 2021 22:16:20 UTC |
	|         | multinode-20210816215441-6986-m03      |                                        |         |         |                               |                               |
	|         | --driver=kvm2                          |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| delete  | -p                                     | multinode-20210816215441-6986-m03      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:16:20 UTC | Mon, 16 Aug 2021 22:16:21 UTC |
	|         | multinode-20210816215441-6986-m03      |                                        |         |         |                               |                               |
	| delete  | -p                                     | multinode-20210816215441-6986          | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:16:21 UTC | Mon, 16 Aug 2021 22:16:23 UTC |
	|         | multinode-20210816215441-6986          |                                        |         |         |                               |                               |
	| start   | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:18:07 UTC | Mon, 16 Aug 2021 22:19:45 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | --wait=true --preload=false            |                                        |         |         |                               |                               |
	|         | --driver=kvm2                          |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0           |                                        |         |         |                               |                               |
	| ssh     | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:45 UTC | Mon, 16 Aug 2021 22:19:47 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | -- sudo crictl pull busybox            |                                        |         |         |                               |                               |
	| start   | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:48 UTC | Mon, 16 Aug 2021 22:20:39 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=kvm2         |                                        |         |         |                               |                               |
	|         |  --container-runtime=containerd        |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3           |                                        |         |         |                               |                               |
	| ssh     | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:39 UTC | Mon, 16 Aug 2021 22:20:39 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | -- sudo crictl image ls                |                                        |         |         |                               |                               |
	| delete  | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:39 UTC | Mon, 16 Aug 2021 22:20:40 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	| start   | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:40 UTC | Mon, 16 Aug 2021 22:21:45 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	|         | --memory=2048 --driver=kvm2            |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| stop    | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:46 UTC | Mon, 16 Aug 2021 22:21:46 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	|         | --cancel-scheduled                     |                                        |         |         |                               |                               |
	| stop    | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:58 UTC | Mon, 16 Aug 2021 22:22:05 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	|         | --schedule 5s                          |                                        |         |         |                               |                               |
	| delete  | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:23 UTC | Mon, 16 Aug 2021 22:22:24 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	| delete  | -p kubenet-20210816222224-6986         | kubenet-20210816222224-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:22:25 UTC |
	| delete  | -p false-20210816222225-6986           | false-20210816222225-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:25 UTC | Mon, 16 Aug 2021 22:22:25 UTC |
	| start   | -p                                     | force-systemd-env-20210816222224-6986  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:24:04 UTC |
	|         | force-systemd-env-20210816222224-6986  |                                        |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | -v=5 --driver=kvm2                     |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| -p      | force-systemd-env-20210816222224-6986  | force-systemd-env-20210816222224-6986  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:04 UTC | Mon, 16 Aug 2021 22:24:04 UTC |
	|         | ssh cat /etc/containerd/config.toml    |                                        |         |         |                               |                               |
	| delete  | -p                                     | force-systemd-env-20210816222224-6986  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:04 UTC | Mon, 16 Aug 2021 22:24:05 UTC |
	|         | force-systemd-env-20210816222224-6986  |                                        |         |         |                               |                               |
	| start   | -p pause-20210816222224-6986           | pause-20210816222224-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:24:28 UTC |
	|         | --memory=2048                          |                                        |         |         |                               |                               |
	|         | --install-addons=false                 |                                        |         |         |                               |                               |
	|         | --wait=all --driver=kvm2               |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| start   | -p                                     | kubernetes-upgrade-20210816222225-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:25 UTC | Mon, 16 Aug 2021 22:24:48 UTC |
	|         | kubernetes-upgrade-20210816222225-6986 |                                        |         |         |                               |                               |
	|         | --memory=2200                          |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0           |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=kvm2   |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| stop    | -p                                     | kubernetes-upgrade-20210816222225-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:49 UTC | Mon, 16 Aug 2021 22:24:53 UTC |
	|         | kubernetes-upgrade-20210816222225-6986 |                                        |         |         |                               |                               |
	| start   | -p                                     | offline-containerd-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:25:02 UTC |
	|         | offline-containerd-20210816222224-6986 |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048   |                                        |         |         |                               |                               |
	|         | --wait=true --driver=kvm2              |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| delete  | -p                                     | offline-containerd-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:02 UTC | Mon, 16 Aug 2021 22:25:03 UTC |
	|         | offline-containerd-20210816222224-6986 |                                        |         |         |                               |                               |
	| start   | -p pause-20210816222224-6986           | pause-20210816222224-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:28 UTC | Mon, 16 Aug 2021 22:25:13 UTC |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                     |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:24:54
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
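
The four header lines above declare the klog/glog line format used by every entry that follows. For reference, a small Go sketch (an illustration assumed here, not part of minikube) that splits such a line into its fields:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var klogLine = regexp.MustCompile(
        `^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ :]+):(\d+)\] (.*)$`)

    func main() {
        m := klogLine.FindStringSubmatch(
            "I0816 22:24:54.079177   10879 out.go:298] Setting OutFile to fd 1 ...")
        if m != nil {
            fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s:%s msg=%q\n",
                m[1], m[2], m[3], m[4], m[5], m[6], m[7])
        }
    }
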
	I0816 22:24:54.079177   10879 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:24:54.079273   10879 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:24:54.079278   10879 out.go:311] Setting ErrFile to fd 2...
	I0816 22:24:54.079280   10879 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:24:54.079426   10879 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:24:54.079721   10879 out.go:305] Setting JSON to false
	I0816 22:24:54.187099   10879 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":4056,"bootTime":1629148638,"procs":185,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:24:54.187527   10879 start.go:121] virtualization: kvm guest
	I0816 22:24:54.190315   10879 out.go:177] * [kubernetes-upgrade-20210816222225-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:24:54.192235   10879 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:24:54.190469   10879 notify.go:169] Checking for updates...
	I0816 22:24:54.193922   10879 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:24:54.195578   10879 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:24:54.197163   10879 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:24:54.197582   10879 config.go:177] Loaded profile config "kubernetes-upgrade-20210816222225-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0816 22:24:54.197998   10879 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:24:54.198058   10879 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:24:54.215228   10879 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45677
	I0816 22:24:54.215770   10879 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:24:54.216328   10879 main.go:130] libmachine: Using API Version  1
	I0816 22:24:54.216350   10879 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:24:54.216734   10879 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:24:54.216908   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:24:54.217075   10879 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:24:54.217475   10879 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:24:54.217512   10879 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:24:54.229224   10879 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34399
	I0816 22:24:54.229593   10879 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:24:54.230067   10879 main.go:130] libmachine: Using API Version  1
	I0816 22:24:54.230093   10879 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:24:54.230460   10879 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:24:54.230643   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:24:54.279869   10879 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 22:24:54.279899   10879 start.go:278] selected driver: kvm2
	I0816 22:24:54.279906   10879 start.go:751] validating driver "kvm2" against &{Name:kubernetes-upgrade-20210816222225-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210816222225-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.116.91 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:24:54.280014   10879 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 22:24:54.281335   10879 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:24:54.282098   10879 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 22:24:54.294712   10879 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0816 22:24:54.295176   10879 cni.go:93] Creating CNI manager for ""
	I0816 22:24:54.295202   10879 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:24:54.295212   10879 start_flags.go:277] config:
	{Name:kubernetes-upgrade-20210816222225-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210816222225-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.116.91 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:24:54.295364   10879 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:24:54.297417   10879 out.go:177] * Starting control plane node kubernetes-upgrade-20210816222225-6986 in cluster kubernetes-upgrade-20210816222225-6986
	I0816 22:24:54.297445   10879 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 22:24:54.297484   10879 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0816 22:24:54.297505   10879 cache.go:56] Caching tarball of preloaded images
	I0816 22:24:54.297634   10879 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 22:24:54.297656   10879 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on containerd
	I0816 22:24:54.297784   10879 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/config.json ...
	I0816 22:24:54.297977   10879 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:24:54.298007   10879 start.go:313] acquiring machines lock for kubernetes-upgrade-20210816222225-6986: {Name:mk808edd60d1305a42bb85791729eff4573dbb15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 22:24:54.298081   10879 start.go:317] acquired machines lock for "kubernetes-upgrade-20210816222225-6986" in 55.05µs
	I0816 22:24:54.298103   10879 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:24:54.298109   10879 fix.go:55] fixHost starting: 
	I0816 22:24:54.298510   10879 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:24:54.298561   10879 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:24:54.309226   10879 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33255
	I0816 22:24:54.309690   10879 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:24:54.310211   10879 main.go:130] libmachine: Using API Version  1
	I0816 22:24:54.310242   10879 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:24:54.310587   10879 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:24:54.310840   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:24:54.310996   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetState
	I0816 22:24:54.314433   10879 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20210816222225-6986: state=Stopped err=<nil>
	I0816 22:24:54.314482   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	W0816 22:24:54.314626   10879 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:24:52.760695    9171 pod_ready.go:102] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:53.612575   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:24:53.632520   10732 api_server.go:70] duration metric: took 7.033030474s to wait for apiserver process to appear ...
	I0816 22:24:53.632561   10732 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:24:53.632570   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:24:53.633109   10732 api_server.go:255] stopped: https://192.168.50.226:8443/healthz: Get "https://192.168.50.226:8443/healthz": dial tcp 192.168.50.226:8443: connect: connection refused
	I0816 22:24:54.133848   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:24:54.316518   10879 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-20210816222225-6986" ...
	I0816 22:24:54.316550   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .Start
	I0816 22:24:54.316716   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Ensuring networks are active...
	I0816 22:24:54.318718   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Ensuring network default is active
	I0816 22:24:54.319156   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Ensuring network mk-kubernetes-upgrade-20210816222225-6986 is active
	I0816 22:24:54.319641   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Getting domain xml...
	I0816 22:24:54.321602   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Creating domain...
	I0816 22:24:54.783576   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Waiting to get IP...
	I0816 22:24:54.784705   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:24:54.785273   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has current primary IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:24:54.785327   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Found IP for machine: 192.168.116.91
	I0816 22:24:54.785348   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Reserving static IP address...
	I0816 22:24:54.785810   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-20210816222225-6986", mac: "52:54:00:92:67:21", ip: "192.168.116.91"} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:23:40 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:24:54.785842   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Reserved static IP address: 192.168.116.91
	I0816 22:24:54.785867   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | skip adding static IP to network mk-kubernetes-upgrade-20210816222225-6986 - found existing host DHCP lease matching {name: "kubernetes-upgrade-20210816222225-6986", mac: "52:54:00:92:67:21", ip: "192.168.116.91"}
	I0816 22:24:54.785897   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | Getting to WaitForSSH function...
	I0816 22:24:54.785911   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Waiting for SSH to be available...
	I0816 22:24:54.791673   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:24:54.792070   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:23:40 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:24:54.792097   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:24:54.792320   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | Using SSH client type: external
	I0816 22:24:54.792359   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa (-rw-------)
	I0816 22:24:54.792401   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.116.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:24:54.792424   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | About to run SSH command:
	I0816 22:24:54.792441   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | exit 0
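
Note: the DBG lines above show how the kvm2 driver decides the restarted VM is reachable: it shells out to the external ssh client with the logged options and private key and runs `exit 0`, retrying until the command succeeds. A hedged re-creation of that probe (not the driver's code; the key path is elided here):

    package main

    import (
        "log"
        "os/exec"
        "time"
    )

    func main() {
        for {
            // Options taken from the "Using SSH client type: external" log line.
            err := exec.Command("ssh",
                "-o", "StrictHostKeyChecking=no",
                "-o", "UserKnownHostsFile=/dev/null",
                "-o", "ConnectTimeout=10",
                "-i", "/path/to/.minikube/machines/<name>/id_rsa", // elided; see the log
                "docker@192.168.116.91", "exit 0").Run()
            if err == nil {
                log.Println("SSH available")
                return
            }
            time.Sleep(time.Second)
        }
    }
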
	I0816 22:24:55.186584    9171 pod_ready.go:102] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:57.682612    9171 pod_ready.go:102] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:59.683949    9171 pod_ready.go:102] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:59.090396   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:24:59.090431   10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:24:59.133677   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:24:59.161347   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:24:59.161378   10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:24:59.633911   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:24:59.639524   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:24:59.639548   10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:25:00.133775   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:25:00.151749   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:25:00.151784   10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:25:00.633968   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:25:00.646578   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 200:
	ok
	I0816 22:25:00.661937   10732 api_server.go:139] control plane version: v1.21.3
	I0816 22:25:00.661961   10732 api_server.go:129] duration metric: took 7.029396002s to wait for apiserver health ...
	I0816 22:25:00.661972   10732 cni.go:93] Creating CNI manager for ""
	I0816 22:25:00.661979   10732 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
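
Note: the healthz exchange above is the expected recovery sequence rather than an error: the probe runs unauthenticated, so it gets 403 as system:anonymous until the RBAC bootstrap roles exist, then 500 while the itemized post-start hooks (the [-] lines) are still pending, and finally 200. A minimal polling sketch under the same assumptions (anonymous access, TLS verification skipped):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // probe-style; not for production
        }}
        for {
            resp, err := client.Get("https://192.168.50.226:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                fmt.Println(resp.StatusCode, string(body)) // 403, then 500 with hook detail, then 200 "ok"
                if resp.StatusCode == http.StatusOK {
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
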
	I0816 22:25:01.185512    9171 pod_ready.go:92] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:01.185545    9171 pod_ready.go:81] duration metric: took 23.534022707s waiting for pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.185559    9171 pod_ready.go:78] waiting up to 6m0s for pod "etcd-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.215463    9171 pod_ready.go:92] pod "etcd-offline-containerd-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:01.215489    9171 pod_ready.go:81] duration metric: took 29.921986ms waiting for pod "etcd-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.215503    9171 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.230267    9171 pod_ready.go:92] pod "kube-apiserver-offline-containerd-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:01.230289    9171 pod_ready.go:81] duration metric: took 14.776227ms waiting for pod "kube-apiserver-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.230302    9171 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.241691    9171 pod_ready.go:92] pod "kube-controller-manager-offline-containerd-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:01.241717    9171 pod_ready.go:81] duration metric: took 11.405045ms waiting for pod "kube-controller-manager-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.241733    9171 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dhhrk" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.251986    9171 pod_ready.go:92] pod "kube-proxy-dhhrk" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:01.252017    9171 pod_ready.go:81] duration metric: took 10.275945ms waiting for pod "kube-proxy-dhhrk" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.252030    9171 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.580001    9171 pod_ready.go:92] pod "kube-scheduler-offline-containerd-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:01.580033    9171 pod_ready.go:81] duration metric: took 327.992243ms waiting for pod "kube-scheduler-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.580046    9171 pod_ready.go:38] duration metric: took 36.483444375s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:25:01.580071    9171 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:25:01.580124    9171 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:01.597074    9171 api_server.go:70] duration metric: took 36.950719971s to wait for apiserver process to appear ...
	I0816 22:25:01.597104    9171 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:25:01.597117    9171 api_server.go:239] Checking apiserver healthz at https://192.168.105.22:8443/healthz ...
	I0816 22:25:01.604325    9171 api_server.go:265] https://192.168.105.22:8443/healthz returned 200:
	ok
	I0816 22:25:01.606279    9171 api_server.go:139] control plane version: v1.21.3
	I0816 22:25:01.606301    9171 api_server.go:129] duration metric: took 9.189625ms to wait for apiserver health ...
	I0816 22:25:01.606312    9171 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:25:01.788694    9171 system_pods.go:59] 7 kube-system pods found
	I0816 22:25:01.788767    9171 system_pods.go:61] "coredns-558bd4d5db-jrjhw" [acdb9f4c-484e-4e02-97c3-368ce130507e] Running
	I0816 22:25:01.788794    9171 system_pods.go:61] "etcd-offline-containerd-20210816222224-6986" [5cab4619-a033-47c0-9009-225ece0f2892] Running
	I0816 22:25:01.788801    9171 system_pods.go:61] "kube-apiserver-offline-containerd-20210816222224-6986" [ea1abce8-a6d2-4e57-81c9-97bdd5eefea4] Running
	I0816 22:25:01.788808    9171 system_pods.go:61] "kube-controller-manager-offline-containerd-20210816222224-6986" [9e75aa0c-4fd9-4812-9163-c6c1a26c9f2e] Running
	I0816 22:25:01.788813    9171 system_pods.go:61] "kube-proxy-dhhrk" [a48ab7f9-7dfc-47de-8aca-c172bea7ff31] Running
	I0816 22:25:01.788819    9171 system_pods.go:61] "kube-scheduler-offline-containerd-20210816222224-6986" [3dd47537-37cc-49f2-a469-8ef39825ba4a] Running
	I0816 22:25:01.788827    9171 system_pods.go:61] "storage-provisioner" [e6290b9f-d87d-488d-8f9e-7cbbc59d9585] Running
	I0816 22:25:01.788835    9171 system_pods.go:74] duration metric: took 182.517591ms to wait for pod list to return data ...
	I0816 22:25:01.788850    9171 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:25:01.981356    9171 default_sa.go:45] found service account: "default"
	I0816 22:25:01.981387    9171 default_sa.go:55] duration metric: took 192.530827ms for default service account to be created ...
	I0816 22:25:01.981399    9171 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:25:02.190487    9171 system_pods.go:86] 7 kube-system pods found
	I0816 22:25:02.190528    9171 system_pods.go:89] "coredns-558bd4d5db-jrjhw" [acdb9f4c-484e-4e02-97c3-368ce130507e] Running
	I0816 22:25:02.190538    9171 system_pods.go:89] "etcd-offline-containerd-20210816222224-6986" [5cab4619-a033-47c0-9009-225ece0f2892] Running
	I0816 22:25:02.190546    9171 system_pods.go:89] "kube-apiserver-offline-containerd-20210816222224-6986" [ea1abce8-a6d2-4e57-81c9-97bdd5eefea4] Running
	I0816 22:25:02.190554    9171 system_pods.go:89] "kube-controller-manager-offline-containerd-20210816222224-6986" [9e75aa0c-4fd9-4812-9163-c6c1a26c9f2e] Running
	I0816 22:25:02.190560    9171 system_pods.go:89] "kube-proxy-dhhrk" [a48ab7f9-7dfc-47de-8aca-c172bea7ff31] Running
	I0816 22:25:02.190567    9171 system_pods.go:89] "kube-scheduler-offline-containerd-20210816222224-6986" [3dd47537-37cc-49f2-a469-8ef39825ba4a] Running
	I0816 22:25:02.190573    9171 system_pods.go:89] "storage-provisioner" [e6290b9f-d87d-488d-8f9e-7cbbc59d9585] Running
	I0816 22:25:02.190582    9171 system_pods.go:126] duration metric: took 209.176198ms to wait for k8s-apps to be running ...
	I0816 22:25:02.190596    9171 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:25:02.190648    9171 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:25:02.207959    9171 system_svc.go:56] duration metric: took 17.354686ms WaitForService to wait for kubelet.
	I0816 22:25:02.207991    9171 kubeadm.go:547] duration metric: took 37.56164237s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:25:02.208036    9171 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:25:02.385401    9171 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:25:02.385432    9171 node_conditions.go:123] node cpu capacity is 2
	I0816 22:25:02.385444    9171 node_conditions.go:105] duration metric: took 177.399541ms to run NodePressure ...
	I0816 22:25:02.385455    9171 start.go:231] waiting for startup goroutines ...
	I0816 22:25:02.438114    9171 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 22:25:02.440691    9171 out.go:177] * Done! kubectl is now configured to use "offline-containerd-20210816222224-6986" cluster and "default" namespace by default
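	The pod_ready.go waits in the offline-containerd run above boil down to polling each pod's Ready condition through the API until it reports True. A client-go sketch of one such wait (kubeconfig path, polling interval, and timeout are placeholder assumptions):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady blocks until the named pod reports the Ready condition,
	// approximating the pod_ready.go waits in the log above.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // not there yet; keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(waitPodReady(cs, "kube-system", "coredns-558bd4d5db-jrjhw", 6*time.Minute))
	}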
	I0816 22:25:00.663954   10732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:25:00.664005   10732 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:25:00.674379   10732 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
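	The "scp memory" notation means the asset is streamed from minikube's own binary over SSH rather than copied from a file on disk; the 457-byte payload here is a bridge CNI config list. A sketch of what a minimal conflist of this kind looks like, written with the assumed name and path from the log; every field value is an illustrative assumption, not the exact bytes minikube installs:

	package main

	import "os"

	// A minimal CNI conflist for the bridge plugin, of the sort installed
	// at /etc/cni/net.d/1-k8s.conflist. All values are illustrative.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": {"portMappings": true}
	    }
	  ]
	}`

	func main() {
		// Needs write access to /etc/cni/net.d; minikube does this via sudo over SSH.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0644); err != nil {
			panic(err)
		}
	}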
	I0816 22:25:00.699896   10732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:25:00.718704   10732 system_pods.go:59] 6 kube-system pods found
	I0816 22:25:00.718763   10732 system_pods.go:61] "coredns-558bd4d5db-gkxhz" [5aa76749-775e-423d-bbf9-680a20a27051] Running
	I0816 22:25:00.718780   10732 system_pods.go:61] "etcd-pause-20210816222224-6986" [f621b99e-0604-4bed-8c4e-4f5741e52f7b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 22:25:00.718802   10732 system_pods.go:61] "kube-apiserver-pause-20210816222224-6986" [b1c46709-4b0b-4c9c-a701-d595a58214ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 22:25:00.718811   10732 system_pods.go:61] "kube-controller-manager-pause-20210816222224-6986" [777c035e-5f34-469a-afb5-4f8ef90ccbfb] Running
	I0816 22:25:00.718819   10732 system_pods.go:61] "kube-proxy-7l59t" [3c0e0899-31c1-477a-a6d4-2844091deea2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:25:00.718830   10732 system_pods.go:61] "kube-scheduler-pause-20210816222224-6986" [6b32acf9-8108-45a6-901e-70cd125190f8] Running
	I0816 22:25:00.718838   10732 system_pods.go:74] duration metric: took 18.921493ms to wait for pod list to return data ...
	I0816 22:25:00.718847   10732 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:25:00.723789   10732 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:25:00.723820   10732 node_conditions.go:123] node cpu capacity is 2
	I0816 22:25:00.723836   10732 node_conditions.go:105] duration metric: took 4.978152ms to run NodePressure ...
	I0816 22:25:00.723854   10732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:01.396623   10732 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:25:01.403109   10732 kubeadm.go:746] kubelet initialised
	I0816 22:25:01.403139   10732 kubeadm.go:747] duration metric: took 6.492031ms waiting for restarted kubelet to initialise ...
	I0816 22:25:01.403151   10732 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:25:01.409386   10732 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:03.432924   10732 pod_ready.go:102] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:05.435685   10732 pod_ready.go:102] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:05.951433   10732 pod_ready.go:92] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:05.951457   10732 pod_ready.go:81] duration metric: took 4.542029801s waiting for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:05.951470   10732 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:06.969870   10732 pod_ready.go:92] pod "etcd-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:06.969903   10732 pod_ready.go:81] duration metric: took 1.018424787s waiting for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:06.969918   10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:06.978963   10732 pod_ready.go:92] pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:06.978984   10732 pod_ready.go:81] duration metric: took 9.058114ms waiting for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:06.978997   10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:07.986911   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | SSH cmd err, output: <nil>: 
	I0816 22:25:07.987289   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetConfigRaw
	I0816 22:25:07.988117   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetIP
	I0816 22:25:07.993471   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:07.993933   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:07.993970   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:07.994335   10879 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/config.json ...
	I0816 22:25:07.994547   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:07.994761   10879 machine.go:88] provisioning docker machine ...
	I0816 22:25:07.994788   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:07.994976   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetMachineName
	I0816 22:25:07.995114   10879 buildroot.go:166] provisioning hostname "kubernetes-upgrade-20210816222225-6986"
	I0816 22:25:07.995139   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetMachineName
	I0816 22:25:07.995291   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.000173   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.000497   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.000524   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.000680   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
	I0816 22:25:08.000825   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.000965   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.001081   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
	I0816 22:25:08.001235   10879 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:08.001401   10879 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.116.91 22 <nil> <nil>}
	I0816 22:25:08.001421   10879 main.go:130] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20210816222225-6986 && echo "kubernetes-upgrade-20210816222225-6986" | sudo tee /etc/hostname
	I0816 22:25:08.156978   10879 main.go:130] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20210816222225-6986
	
	I0816 22:25:08.157018   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.162417   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.162702   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.162735   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.162864   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
	I0816 22:25:08.163064   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.163277   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.163406   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
	I0816 22:25:08.163558   10879 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:08.163733   10879 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.116.91 22 <nil> <nil>}
	I0816 22:25:08.163761   10879 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20210816222225-6986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20210816222225-6986/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20210816222225-6986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:25:08.307005   10879 main.go:130] libmachine: SSH cmd err, output: <nil>: 
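	The shell snippet above is deliberately idempotent: it leaves /etc/hosts alone if any line already ends with the hostname, rewrites an existing 127.0.1.1 entry if there is one, and only otherwise appends a new entry. The same decision tree in Go, as a rough sketch:

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the grep/sed logic above: skip if hostname is
	// already mapped, rewrite an existing 127.0.1.1 line, else append one.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
			return nil // already present
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		loopback := regexp.MustCompile(`^127\.0\.1\.1\s`)
		replaced := false
		for i, l := range lines {
			if loopback.MatchString(l) {
				lines[i] = "127.0.1.1 " + hostname
				replaced = true
				break
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+hostname)
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
	}

	func main() {
		fmt.Println(ensureHostsEntry("/etc/hosts", "kubernetes-upgrade-20210816222225-6986"))
	}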
	I0816 22:25:08.307035   10879 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:25:08.307053   10879 buildroot.go:174] setting up certificates
	I0816 22:25:08.307064   10879 provision.go:83] configureAuth start
	I0816 22:25:08.307075   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetMachineName
	I0816 22:25:08.307332   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetIP
	I0816 22:25:08.313331   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.313697   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.313729   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.313896   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.318531   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.318844   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.318878   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.318990   10879 provision.go:138] copyHostCerts
	I0816 22:25:08.319059   10879 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:25:08.319073   10879 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:25:08.319128   10879 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:25:08.319254   10879 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:25:08.319268   10879 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:25:08.319294   10879 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:25:08.319359   10879 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:25:08.319368   10879 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:25:08.319397   10879 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
	I0816 22:25:08.319465   10879 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20210816222225-6986 san=[192.168.116.91 192.168.116.91 localhost 127.0.0.1 minikube kubernetes-upgrade-20210816222225-6986]
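	provision.go:112 generates a per-machine server certificate whose SAN list carries the VM IP, localhost, minikube, and the machine name, signed by the CA under .minikube/certs. A condensed crypto/x509 sketch of SAN-bearing cert generation; key size, validity, and the self-signed stand-in CA are assumptions (minikube signs with its existing ca.pem/ca-key.pem), and errors are elided for brevity:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Self-signed CA stand-in for .minikube/certs/ca.pem + ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert carrying the SANs shown in the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-20210816222225-6986"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(10, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("192.168.116.91"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "kubernetes-upgrade-20210816222225-6986"},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}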
	I0816 22:25:08.473458   10879 provision.go:172] copyRemoteCerts
	I0816 22:25:08.473513   10879 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:25:08.473535   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.478720   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.479123   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.479157   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.479301   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
	I0816 22:25:08.479517   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.479669   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
	I0816 22:25:08.479802   10879 sshutil.go:53] new ssh client: &{IP:192.168.116.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa Username:docker}
	I0816 22:25:08.575404   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:25:08.593200   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
	I0816 22:25:08.611874   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:25:08.631651   10879 provision.go:86] duration metric: configureAuth took 324.57656ms
	I0816 22:25:08.631679   10879 buildroot.go:189] setting minikube options for container-runtime
	I0816 22:25:08.631847   10879 config.go:177] Loaded profile config "kubernetes-upgrade-20210816222225-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0816 22:25:08.631862   10879 machine.go:91] provisioned docker machine in 637.081285ms
	I0816 22:25:08.631877   10879 start.go:267] post-start starting for "kubernetes-upgrade-20210816222225-6986" (driver="kvm2")
	I0816 22:25:08.631885   10879 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:25:08.631905   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:08.632222   10879 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:25:08.632262   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.638223   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.638599   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.638628   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.638804   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
	I0816 22:25:08.639025   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.639186   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
	I0816 22:25:08.639324   10879 sshutil.go:53] new ssh client: &{IP:192.168.116.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa Username:docker}
	I0816 22:25:08.731490   10879 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:25:08.736384   10879 info.go:137] Remote host: Buildroot 2020.02.12
	I0816 22:25:08.736415   10879 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:25:08.736479   10879 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:25:08.736640   10879 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0816 22:25:08.736796   10879 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:25:08.744563   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:25:08.762219   10879 start.go:270] post-start completed in 130.327769ms
	I0816 22:25:08.762269   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:08.762532   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.768066   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.768447   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.768479   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.768580   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
	I0816 22:25:08.768764   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.768937   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.769097   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
	I0816 22:25:08.769278   10879 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:08.769412   10879 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.116.91 22 <nil> <nil>}
	I0816 22:25:08.769423   10879 main.go:130] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0816 22:25:08.908369   10879 main.go:130] libmachine: SSH cmd err, output: <nil>: 1629152708.857933809
	
	I0816 22:25:08.908397   10879 fix.go:212] guest clock: 1629152708.857933809
	I0816 22:25:08.908407   10879 fix.go:225] Guest: 2021-08-16 22:25:08.857933809 +0000 UTC Remote: 2021-08-16 22:25:08.762514681 +0000 UTC m=+14.743694760 (delta=95.419128ms)
	I0816 22:25:08.908465   10879 fix.go:196] guest clock delta is within tolerance: 95.419128ms
	I0816 22:25:08.908473   10879 fix.go:57] fixHost completed within 14.610364111s
	I0816 22:25:08.908483   10879 start.go:80] releasing machines lock for "kubernetes-upgrade-20210816222225-6986", held for 14.610387547s
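	Two notes on the block above. First, the mangled command `date +%!s(MISSING).%!N(MISSING)` is a printf-verb artifact in the log line itself: the literal command sent over SSH is `date +%s.%N`, whose % verbs confuse Go's fmt when the command string is echoed. Second, fix.go only resyncs the guest clock when the host/guest delta exceeds a tolerance; the 95.419128ms delta was within bounds here. A toy version of that comparison (the tolerance value is an assumption):

	package main

	import (
		"fmt"
		"time"
	)

	// clockWithinTolerance reports whether the guest/host clock delta is
	// acceptable, as in fix.go's "guest clock delta is within tolerance".
	func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		guest := time.Unix(1629152708, 857933809) // parsed from `date +%s.%N` over SSH
		host := guest.Add(-95419128 * time.Nanosecond)
		delta, ok := clockWithinTolerance(guest, host, 2*time.Second) // tolerance assumed
		fmt.Println(delta, ok)
	}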
	I0816 22:25:08.908527   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:08.908801   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetIP
	I0816 22:25:08.914888   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.915258   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.915290   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.915507   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:08.915732   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:08.916309   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:08.916592   10879 ssh_runner.go:149] Run: systemctl --version
	I0816 22:25:08.916617   10879 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:25:08.916626   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.916658   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.923331   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.923688   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.923714   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.923808   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
	I0816 22:25:08.923961   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.924114   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
	I0816 22:25:08.924243   10879 sshutil.go:53] new ssh client: &{IP:192.168.116.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa Username:docker}
	I0816 22:25:08.924528   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.924867   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.924898   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.925049   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
	I0816 22:25:08.925209   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.925407   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
	I0816 22:25:08.925534   10879 sshutil.go:53] new ssh client: &{IP:192.168.116.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa Username:docker}
	I0816 22:25:09.022865   10879 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 22:25:09.023038   10879 ssh_runner.go:149] Run: sudo crictl images --output json
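	preload.go decides whether the preloaded image tarball already covers the target Kubernetes version by asking crictl for the image list in JSON and inspecting the tags. A sketch of consuming that output; the struct fields reflect crictl's JSON shape as I understand it and should be treated as assumptions:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// crictlImages models the subset of `crictl images --output json`
	// needed to list repo tags (field names assumed from crictl's output).
	type crictlImages struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var imgs crictlImages
		if err := json.Unmarshal(out, &imgs); err != nil {
			panic(err)
		}
		for _, img := range imgs.Images {
			fmt.Println(img.RepoTags)
		}
	}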
	I0816 22:25:09.000201   10732 pod_ready.go:102] pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:10.499577   10732 pod_ready.go:92] pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:10.499613   10732 pod_ready.go:81] duration metric: took 3.520603411s waiting for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.499631   10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.508715   10732 pod_ready.go:92] pod "kube-proxy-7l59t" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:10.508738   10732 pod_ready.go:81] duration metric: took 9.098529ms waiting for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.508749   10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.514516   10732 pod_ready.go:92] pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:10.514536   10732 pod_ready.go:81] duration metric: took 5.779042ms waiting for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.514546   10732 pod_ready.go:38] duration metric: took 9.111379533s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:25:10.514567   10732 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:25:10.530219   10732 ops.go:34] apiserver oom_adj: -16
	I0816 22:25:10.530242   10732 kubeadm.go:604] restartCluster took 31.19958524s
	I0816 22:25:10.530251   10732 kubeadm.go:392] StartCluster complete in 31.557512009s
	I0816 22:25:10.530271   10732 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:25:10.530404   10732 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:10.531238   10732 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:25:10.532000   10732 kapi.go:59] client config for pause-20210816222224-6986: &rest.Config{Host:"https://192.168.50.226:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 22:25:10.647656   10732 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210816222224-6986" rescaled to 1
	I0816 22:25:10.647728   10732 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.50.226 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:25:10.647757   10732 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:25:10.647794   10732 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0816 22:25:10.649327   10732 out.go:177] * Verifying Kubernetes components...
	I0816 22:25:10.649398   10732 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:25:10.647852   10732 addons.go:59] Setting storage-provisioner=true in profile "pause-20210816222224-6986"
	I0816 22:25:10.647862   10732 addons.go:59] Setting default-storageclass=true in profile "pause-20210816222224-6986"
	I0816 22:25:10.647991   10732 config.go:177] Loaded profile config "pause-20210816222224-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:25:10.649480   10732 addons.go:135] Setting addon storage-provisioner=true in "pause-20210816222224-6986"
	W0816 22:25:10.649500   10732 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:25:10.649516   10732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210816222224-6986"
	I0816 22:25:10.649532   10732 host.go:66] Checking if "pause-20210816222224-6986" exists ...
	I0816 22:25:10.650748   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:10.650827   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:10.653189   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:10.653249   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:10.664888   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0816 22:25:10.665365   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:10.665893   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:25:10.665915   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:10.666315   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:10.666493   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
	I0816 22:25:10.667827   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34733
	I0816 22:25:10.668293   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:10.668762   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:25:10.668782   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:10.669202   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:10.669761   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:10.669802   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:10.670861   10732 kapi.go:59] client config for pause-20210816222224-6986: &rest.Config{Host:"https://192.168.50.226:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 22:25:10.676486   10732 addons.go:135] Setting addon default-storageclass=true in "pause-20210816222224-6986"
	W0816 22:25:10.676510   10732 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:25:10.676539   10732 host.go:66] Checking if "pause-20210816222224-6986" exists ...
	I0816 22:25:10.676985   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:10.677031   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:10.682317   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39313
	I0816 22:25:10.682805   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:10.683360   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:25:10.683382   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:10.683737   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:10.683924   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
	I0816 22:25:10.687519   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:25:10.693597   10732 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:25:10.693708   10732 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:25:10.693722   10732 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:25:10.693742   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
	I0816 22:25:10.692712   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45043
	I0816 22:25:10.694563   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:10.695082   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:25:10.695103   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:10.695455   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:10.696063   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:10.696115   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:10.700367   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:25:10.700792   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
	I0816 22:25:10.700813   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:25:10.701111   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
	I0816 22:25:10.701350   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
	I0816 22:25:10.701537   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
	I0816 22:25:10.701730   10732 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
	I0816 22:25:10.709887   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33339
	I0816 22:25:10.710304   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:10.710912   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:25:10.710938   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:10.711336   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:10.711547   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
	I0816 22:25:10.714430   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:25:10.714683   10732 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:25:10.714702   10732 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:25:10.714720   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
	I0816 22:25:10.720808   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:25:10.721319   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
	I0816 22:25:10.721342   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:25:10.721485   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
	I0816 22:25:10.721643   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
	I0816 22:25:10.721769   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
	I0816 22:25:10.721919   10732 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
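	Both addon manifests reach the guest via the same "scp memory" path as the CNI config earlier: the bytes are piped over the SSH session into a privileged write on the guest, with no intermediate file. A rough equivalent using golang.org/x/crypto/ssh; the host, user, key path, and the sudo tee command are placeholder assumptions for how ssh_runner behaves, not its confirmed implementation:

	package main

	import (
		"bytes"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// pushBytes writes data to remotePath by piping it into `sudo tee`
	// over an SSH session, approximating ssh_runner's "scp memory -->".
	func pushBytes(client *ssh.Client, data []byte, remotePath string) error {
		sess, err := client.NewSession()
		if err != nil {
			return err
		}
		defer sess.Close()
		sess.Stdin = bytes.NewReader(data)
		return sess.Run("sudo tee " + remotePath + " >/dev/null")
	}

	func main() {
		key, _ := os.ReadFile("/path/to/id_rsa") // placeholder key path
		signer, _ := ssh.ParsePrivateKey(key)
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.168.50.226:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		pushBytes(client, []byte("apiVersion: v1\n..."), "/etc/kubernetes/addons/storageclass.yaml")
	}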
	I0816 22:25:10.832212   10732 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:25:10.862755   10732 node_ready.go:35] waiting up to 6m0s for node "pause-20210816222224-6986" to be "Ready" ...
	I0816 22:25:10.863120   10732 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0816 22:25:10.867110   10732 node_ready.go:49] node "pause-20210816222224-6986" has status "Ready":"True"
	I0816 22:25:10.867130   10732 node_ready.go:38] duration metric: took 4.344058ms waiting for node "pause-20210816222224-6986" to be "Ready" ...
	I0816 22:25:10.867143   10732 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:25:10.883113   10732 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.892065   10732 pod_ready.go:92] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:10.892084   10732 pod_ready.go:81] duration metric: took 8.944517ms waiting for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.892096   10732 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.895462   10732 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:25:11.127716   10732 pod_ready.go:92] pod "etcd-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:11.127749   10732 pod_ready.go:81] duration metric: took 235.644563ms waiting for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:11.127765   10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:11.536655   10732 pod_ready.go:92] pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:11.536676   10732 pod_ready.go:81] duration metric: took 408.901449ms waiting for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:11.536690   10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:11.539596   10732 main.go:130] libmachine: Making call to close driver server
	I0816 22:25:11.539618   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
	I0816 22:25:11.539697   10732 main.go:130] libmachine: Making call to close driver server
	I0816 22:25:11.539725   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
	I0816 22:25:11.540009   10732 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:25:11.540024   10732 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:25:11.540041   10732 main.go:130] libmachine: Making call to close driver server
	I0816 22:25:11.540041   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | Closing plugin on server side
	I0816 22:25:11.540051   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
	I0816 22:25:11.540067   10732 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:25:11.540075   10732 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:25:11.540083   10732 main.go:130] libmachine: Making call to close driver server
	I0816 22:25:11.540092   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
	I0816 22:25:11.540126   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | Closing plugin on server side
	I0816 22:25:11.540298   10732 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:25:11.540310   10732 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:25:11.540320   10732 main.go:130] libmachine: Making call to close driver server
	I0816 22:25:11.540329   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
	I0816 22:25:11.540417   10732 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:25:11.540429   10732 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:25:11.540490   10732 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:25:11.540502   10732 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:25:11.542638   10732 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0816 22:25:11.542662   10732 addons.go:344] enableAddons completed in 894.875902ms
	I0816 22:25:11.931820   10732 pod_ready.go:92] pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:11.931845   10732 pod_ready.go:81] duration metric: took 395.147421ms waiting for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:11.931860   10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:12.329464   10732 pod_ready.go:92] pod "kube-proxy-7l59t" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:12.329493   10732 pod_ready.go:81] duration metric: took 397.623774ms waiting for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:12.329507   10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:12.734335   10732 pod_ready.go:92] pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:12.734360   10732 pod_ready.go:81] duration metric: took 404.844565ms waiting for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:12.734374   10732 pod_ready.go:38] duration metric: took 1.867218741s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:25:12.734394   10732 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:25:12.734439   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:12.754510   10732 api_server.go:70] duration metric: took 2.106745047s to wait for apiserver process to appear ...
	I0816 22:25:12.754540   10732 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:25:12.754553   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:25:12.792067   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 200:
	ok
	I0816 22:25:12.794542   10732 api_server.go:139] control plane version: v1.21.3
	I0816 22:25:12.794565   10732 api_server.go:129] duration metric: took 40.01886ms to wait for apiserver health ...
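	The sequence logged just above — pgrep for a running kube-apiserver, then polling https://192.168.50.226:8443/healthz until it answers 200 "ok" — is easy to reproduce by hand. The following is a minimal Go sketch, not minikube's actual implementation: the address is taken from the log, and certificate verification is skipped only because the sketch does not load the cluster CA.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Sketch only: the apiserver cert is signed by the cluster CA,
				// which this standalone probe does not have. Real callers should
				// pin that CA rather than skip verification.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.50.226:8443/healthz") // address from the log
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // a healthy apiserver prints "200: ok"
	}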
	I0816 22:25:12.794577   10732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:25:12.941013   10732 system_pods.go:59] 7 kube-system pods found
	I0816 22:25:12.941048   10732 system_pods.go:61] "coredns-558bd4d5db-gkxhz" [5aa76749-775e-423d-bbf9-680a20a27051] Running
	I0816 22:25:12.941053   10732 system_pods.go:61] "etcd-pause-20210816222224-6986" [f621b99e-0604-4bed-8c4e-4f5741e52f7b] Running
	I0816 22:25:12.941057   10732 system_pods.go:61] "kube-apiserver-pause-20210816222224-6986" [b1c46709-4b0b-4c9c-a701-d595a58214ba] Running
	I0816 22:25:12.941102   10732 system_pods.go:61] "kube-controller-manager-pause-20210816222224-6986" [777c035e-5f34-469a-afb5-4f8ef90ccbfb] Running
	I0816 22:25:12.941116   10732 system_pods.go:61] "kube-proxy-7l59t" [3c0e0899-31c1-477a-a6d4-2844091deea2] Running
	I0816 22:25:12.941122   10732 system_pods.go:61] "kube-scheduler-pause-20210816222224-6986" [6b32acf9-8108-45a6-901e-70cd125190f8] Running
	I0816 22:25:12.941136   10732 system_pods.go:61] "storage-provisioner" [4f138dc7-da0e-4775-b4de-b0f7d616b212] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:25:12.941158   10732 system_pods.go:74] duration metric: took 146.575596ms to wait for pod list to return data ...
	I0816 22:25:12.941176   10732 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:25:13.132349   10732 default_sa.go:45] found service account: "default"
	I0816 22:25:13.132381   10732 default_sa.go:55] duration metric: took 191.195172ms for default service account to be created ...
	I0816 22:25:13.132394   10732 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:25:13.340094   10732 system_pods.go:86] 7 kube-system pods found
	I0816 22:25:13.340135   10732 system_pods.go:89] "coredns-558bd4d5db-gkxhz" [5aa76749-775e-423d-bbf9-680a20a27051] Running
	I0816 22:25:13.340146   10732 system_pods.go:89] "etcd-pause-20210816222224-6986" [f621b99e-0604-4bed-8c4e-4f5741e52f7b] Running
	I0816 22:25:13.340155   10732 system_pods.go:89] "kube-apiserver-pause-20210816222224-6986" [b1c46709-4b0b-4c9c-a701-d595a58214ba] Running
	I0816 22:25:13.340163   10732 system_pods.go:89] "kube-controller-manager-pause-20210816222224-6986" [777c035e-5f34-469a-afb5-4f8ef90ccbfb] Running
	I0816 22:25:13.340172   10732 system_pods.go:89] "kube-proxy-7l59t" [3c0e0899-31c1-477a-a6d4-2844091deea2] Running
	I0816 22:25:13.340184   10732 system_pods.go:89] "kube-scheduler-pause-20210816222224-6986" [6b32acf9-8108-45a6-901e-70cd125190f8] Running
	I0816 22:25:13.340196   10732 system_pods.go:89] "storage-provisioner" [4f138dc7-da0e-4775-b4de-b0f7d616b212] Running
	I0816 22:25:13.340210   10732 system_pods.go:126] duration metric: took 207.809217ms to wait for k8s-apps to be running ...
	I0816 22:25:13.340225   10732 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:25:13.340279   10732 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:25:13.358716   10732 system_svc.go:56] duration metric: took 18.47804ms WaitForService to wait for kubelet.
	I0816 22:25:13.358752   10732 kubeadm.go:547] duration metric: took 2.710991068s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:25:13.358785   10732 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:25:13.536797   10732 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:25:13.536830   10732 node_conditions.go:123] node cpu capacity is 2
	I0816 22:25:13.536848   10732 node_conditions.go:105] duration metric: took 178.056493ms to run NodePressure ...
	I0816 22:25:13.536863   10732 start.go:231] waiting for startup goroutines ...
	I0816 22:25:13.602415   10732 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 22:25:13.604425   10732 out.go:177] * Done! kubectl is now configured to use "pause-20210816222224-6986" cluster and "default" namespace by default
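	Every readiness gate this second start walked through (node "Ready", each system-critical pod "Ready", then the apiserver process and healthz checks) reduces to polling the Kubernetes API and inspecting status conditions. Below is a minimal sketch of the node-readiness poll, assuming client-go; it is not minikube's code — the kubeconfig path is a placeholder, and the 6-minute budget simply mirrors the 6m0s timeout in the log.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node's Ready condition is True.
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s budget in the log
		for time.Now().Before(deadline) {
			if ok, _ := nodeReady(cs, "pause-20210816222224-6986"); ok {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		panic("timed out waiting for node Ready")
	}

	The per-pod waits recorded above follow the same shape, checking each pod's PodReady condition instead of the node's.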
	I0816 22:25:13.045168   10879 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.02209826s)
	I0816 22:25:13.045290   10879 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
	I0816 22:25:13.045383   10879 ssh_runner.go:149] Run: which lz4
	I0816 22:25:13.050542   10879 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 22:25:13.055627   10879 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0816 22:25:13.055661   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (945588089 bytes)
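	The two steps above implement a simple probe-then-transfer pattern: stat -c "%s %y" asks whether the preloaded tarball already exists on the guest, and only when that probe exits non-zero is the ~945 MB tarball pushed over. A local sketch of the same pattern follows, with placeholder paths; the real code runs both steps over SSH inside the VM rather than locally.

	package main

	import (
		"fmt"
		"io"
		"os"
		"os/exec"
	)

	func main() {
		const target = "/preloaded.tar.lz4"               // destination path, as in the log
		const source = "/path/to/cache/preloaded.tar.lz4" // placeholder cache path

		// Existence probe: the same `stat -c "%s %y" <file>` the log records.
		// stat exits 1 when the file is missing, which surfaces here as an error.
		if err := exec.Command("stat", "-c", "%s %y", target).Run(); err == nil {
			fmt.Println("preloaded tarball already present, skipping copy")
			return
		}

		// Probe failed, so transfer the tarball.
		src, err := os.Open(source)
		if err != nil {
			panic(err)
		}
		defer src.Close()
		dst, err := os.Create(target)
		if err != nil {
			panic(err)
		}
		defer dst.Close()
		n, err := io.Copy(dst, src)
		if err != nil {
			panic(err)
		}
		fmt.Printf("copied %d bytes to %s\n", n, target)
	}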
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	f04c445038901       6e38f40d628db       5 seconds ago        Running             storage-provisioner       0                   0e10d9862204b
	e70dd80568a0a       296a6d5035e2d       16 seconds ago       Running             coredns                   1                   c649190b7c07d
	2585772c8a261       adb2816ea823a       17 seconds ago       Running             kube-proxy                2                   d73b4cafe25f0
	53780b2759956       3d174f00aa39e       24 seconds ago       Running             kube-apiserver            2                   fb9f201b2c2e1
	76fef890edebe       6be0dc1302e30       24 seconds ago       Running             kube-scheduler            2                   1718d2a0276ce
	69a7fab4848c4       0369cf4303ffd       24 seconds ago       Running             etcd                      2                   3b9459ff3a0d8
	825e79d62718c       bc2bb319a7038       25 seconds ago       Running             kube-controller-manager   2                   feab707eb735a
	7626b842ef886       3d174f00aa39e       25 seconds ago       Created             kube-apiserver            1                   fb9f201b2c2e1
	9d9f34b35e099       adb2816ea823a       25 seconds ago       Created             kube-proxy                1                   d73b4cafe25f0
	97c4cc3614116       6be0dc1302e30       25 seconds ago       Created             kube-scheduler            1                   1718d2a0276ce
	3644e35e40a2f       0369cf4303ffd       25 seconds ago       Created             etcd                      1                   3b9459ff3a0d8
	8c5f2c007cff4       bc2bb319a7038       29 seconds ago       Created             kube-controller-manager   1                   feab707eb735a
	28c7161cd49a4       296a6d5035e2d       About a minute ago   Exited              coredns                   0                   05c2427240818
	a8503bd796d5d       adb2816ea823a       About a minute ago   Exited              kube-proxy                0                   a86c3b6ee3a70
	124fa393359f7       0369cf4303ffd       2 minutes ago        Exited              etcd                      0                   94a493a65b593
	8710cefecdbe5       6be0dc1302e30       2 minutes ago        Exited              kube-scheduler            0                   982e66890a90d
	38dc61b214a9c       3d174f00aa39e       2 minutes ago        Exited              kube-apiserver            0                   630ed9d4644e9
	
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2021-08-16 22:22:35 UTC, end at Mon 2021-08-16 22:25:17 UTC. --
	Aug 16 22:24:53 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:24:53.606374155Z" level=info msg="StartContainer for \"69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549\" returns successfully"
	Aug 16 22:24:53 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:24:53.687984942Z" level=info msg="StartContainer for \"76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1\" returns successfully"
	Aug 16 22:24:59 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:24:59.121993631Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.146522428Z" level=info msg="CreateContainer within sandbox \"d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:2,}"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.231610260Z" level=info msg="CreateContainer within sandbox \"d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8\" for &ContainerMetadata{Name:kube-proxy,Attempt:2,} returns container id \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.233198734Z" level=info msg="StartContainer for \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.443953465Z" level=info msg="StopPodSandbox for \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.444769081Z" level=info msg="Container to stop \"28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.461877463Z" level=info msg="StartContainer for \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\" returns successfully"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536079353Z" level=info msg="TearDown network for sandbox \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\" successfully"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536191167Z" level=info msg="StopPodSandbox for \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\" returns successfully"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536962082Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-gkxhz,Uid:5aa76749-775e-423d-bbf9-680a20a27051,Namespace:kube-system,Attempt:1,}"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.776744568Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb pid=5007
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.290447333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-gkxhz,Uid:5aa76749-775e-423d-bbf9-680a20a27051,Namespace:kube-system,Attempt:1,} returns sandbox id \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\""
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.300600113Z" level=info msg="CreateContainer within sandbox \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.389478760Z" level=info msg="CreateContainer within sandbox \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\""
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.397162604Z" level=info msg="StartContainer for \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\""
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.594046909Z" level=info msg="StartContainer for \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\" returns successfully"
	Aug 16 22:25:11 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:11.852957632Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:4f138dc7-da0e-4775-b4de-b0f7d616b212,Namespace:kube-system,Attempt:0,}"
	Aug 16 22:25:11 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:11.903771908Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c pid=5174
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.439549893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:4f138dc7-da0e-4775-b4de-b0f7d616b212,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\""
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.451930506Z" level=info msg="CreateContainer within sandbox \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.521875733Z" level=info msg="CreateContainer within sandbox \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.523292924Z" level=info msg="StartContainer for \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.851898064Z" level=info msg="StartContainer for \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\" returns successfully"
	
	* 
	* ==> coredns [28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	I0816 22:24:19.170128       1 trace.go:205] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.168) (total time: 30001ms):
	Trace[2019727887]: [30.001909435s] [30.001909435s] END
	E0816 22:24:19.170279       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0816 22:24:19.171047       1 trace.go:205] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.166) (total time: 30004ms):
	Trace[939984059]: [30.004733433s] [30.004733433s] END
	E0816 22:24:19.171149       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0816 22:24:19.171258       1 trace.go:205] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.166) (total time: 30004ms):
	Trace[911902081]: [30.004945736s] [30.004945736s] END
	E0816 22:24:19.171265       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
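	The failures in this older coredns instance all share one shape: every list against the kubernetes service VIP (10.96.0.1:443) timed out after 30s, which points at pod-to-service networking rather than CoreDNS itself. A quick way to check that symptom from inside the pod network is to dial the same address with a timeout; this is a diagnostic sketch, with the address taken from the log.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 10.96.0.1:443 is the in-cluster kubernetes service VIP from the log.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("service VIP unreachable:", err) // matches the i/o timeouts above
			return
		}
		conn.Close()
		fmt.Println("service VIP reachable")
	}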
	
	* 
	* ==> coredns [e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7ae91e86dd75dee9ae501cb58003198b
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210816222224-6986
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20210816222224-6986
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=pause-20210816222224-6986
	                    minikube.k8s.io/updated_at=2021_08_16T22_23_26_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Aug 2021 22:23:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210816222224-6986
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Aug 2021 22:25:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Aug 2021 22:24:59 +0000   Mon, 16 Aug 2021 22:23:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Aug 2021 22:24:59 +0000   Mon, 16 Aug 2021 22:23:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Aug 2021 22:24:59 +0000   Mon, 16 Aug 2021 22:23:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Aug 2021 22:24:59 +0000   Mon, 16 Aug 2021 22:23:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.226
	  Hostname:    pause-20210816222224-6986
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2033044Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2033044Ki
	  pods:               110
	System Info:
	  Machine ID:                 940ad300f94c41e2a0b0cde81be11541
	  System UUID:                940ad300-f94c-41e2-a0b0-cde81be11541
	  Boot ID:                    ea001a4b-e783-4f93-b7d3-bb910eb45d3c
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-gkxhz                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     92s
	  kube-system                 etcd-pause-20210816222224-6986                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         114s
	  kube-system                 kube-apiserver-pause-20210816222224-6986             250m (12%)    0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-pause-20210816222224-6986    200m (10%)    0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-proxy-7l59t                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-pause-20210816222224-6986             100m (5%)     0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  2m6s (x6 over 2m7s)  kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x5 over 2m7s)  kubelet     Node pause-20210816222224-6986 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x5 over 2m7s)  kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientPID
	  Normal  Starting                 106s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  106s                 kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s                 kubelet     Node pause-20210816222224-6986 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s                 kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  106s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                96s                  kubelet     Node pause-20210816222224-6986 status is now: NodeReady
	  Normal  Starting                 89s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 27s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  27s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  26s (x8 over 27s)    kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 27s)    kubelet     Node pause-20210816222224-6986 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 27s)    kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientPID
	  Normal  Starting                 18s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +3.181431] systemd-fstab-generator[1161]: Ignoring "noauto" for root device
	[  +0.036573] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.985023] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1731 comm=systemd-network
	[  +1.088197] vboxguest: loading out-of-tree module taints kernel.
	[  +0.006251] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.889854] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[ +16.286436] systemd-fstab-generator[2098]: Ignoring "noauto" for root device
	[  +0.258185] systemd-fstab-generator[2128]: Ignoring "noauto" for root device
	[  +0.135377] systemd-fstab-generator[2143]: Ignoring "noauto" for root device
	[  +0.180446] systemd-fstab-generator[2173]: Ignoring "noauto" for root device
	[Aug16 22:23] systemd-fstab-generator[2381]: Ignoring "noauto" for root device
	[ +20.504547] systemd-fstab-generator[2808]: Ignoring "noauto" for root device
	[ +20.717915] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.551219] kauditd_printk_skb: 104 callbacks suppressed
	[Aug16 22:24] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.792051] systemd-fstab-generator[3754]: Ignoring "noauto" for root device
	[  +0.176916] systemd-fstab-generator[3767]: Ignoring "noauto" for root device
	[  +0.230657] systemd-fstab-generator[3792]: Ignoring "noauto" for root device
	[  +4.083098] kauditd_printk_skb: 2 callbacks suppressed
	[  +3.840195] NFSD: Unable to end grace period: -110
	[  +4.324119] systemd-fstab-generator[4543]: Ignoring "noauto" for root device
	[  +6.680726] kauditd_printk_skb: 29 callbacks suppressed
	[Aug16 22:25] kauditd_printk_skb: 14 callbacks suppressed
	[ +12.641213] kauditd_printk_skb: 23 callbacks suppressed
	
	* 
	* ==> etcd [124fa393359f758ea47161b345d2cab4ce486d4473a4caad483449464d44315f] <==
	* 2021-08-16 22:23:41.064197 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210816222224-6986\" " with result "range_response_count:1 size:5052" took too long (6.421187445s) to execute
	2021-08-16 22:23:41.065847 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:341" took too long (6.446897155s) to execute
	2021-08-16 22:23:41.066285 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/node-controller\" " with result "range_response_count:1 size:242" took too long (5.09674902s) to execute
	2021-08-16 22:23:41.068005 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (6.28196539s) to execute
	2021-08-16 22:23:41.068259 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (763.710719ms) to execute
	2021-08-16 22:23:41.880435 W | etcdserver: read-only range request "key:\"/registry/masterleases/192.168.50.226\" " with result "range_response_count:0 size:5" took too long (776.335267ms) to execute
	2021-08-16 22:23:41.881080 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (597.366064ms) to execute
	2021-08-16 22:23:41.882354 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:4569" took too long (763.841142ms) to execute
	2021-08-16 22:23:41.883287 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (621.677263ms) to execute
	2021-08-16 22:23:41.884722 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (481.499599ms) to execute
	2021-08-16 22:23:41.885189 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-pause-20210816222224-6986\" " with result "range_response_count:1 size:5421" took too long (772.180278ms) to execute
	2021-08-16 22:23:42.453217 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (290.061418ms) to execute
	2021-08-16 22:23:42.455427 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (285.893643ms) to execute
	2021-08-16 22:23:42.456943 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-pause-20210816222224-6986\" " with result "range_response_count:1 size:6314" took too long (153.946258ms) to execute
	2021-08-16 22:23:42.458024 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (177.825431ms) to execute
	2021-08-16 22:23:44.267832 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:23:54.092150 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (701.802797ms) to execute
	2021-08-16 22:23:54.093518 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-ctgf5\" " with result "range_response_count:1 size:4473" took too long (1.090386256s) to execute
	2021-08-16 22:23:54.267392 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:23:57.768234 W | etcdserver: request "header:<ID:4263355585347158035 > lease_revoke:<id:3b2a7b510fcb7e67>" with result "size:29" took too long (771.90226ms) to execute
	2021-08-16 22:23:57.768903 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (374.444829ms) to execute
	2021-08-16 22:23:57.769379 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-ctgf5\" " with result "range_response_count:1 size:4473" took too long (765.115046ms) to execute
	2021-08-16 22:24:04.267548 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:24:14.267958 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:24:24.268321 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> etcd [3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423] <==
	* 
	* ==> etcd [69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549] <==
	* 2021-08-16 22:24:53.773065 W | auth: simple token is not cryptographically signed
	2021-08-16 22:24:53.837118 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	raft2021/08/16 22:24:53 INFO: e840193bf29c3b2a switched to configuration voters=(16735403960572853034)
	2021-08-16 22:24:53.849298 I | etcdserver/membership: added member e840193bf29c3b2a [https://192.168.50.226:2380] to cluster 99b90e1bea73c730
	2021-08-16 22:24:53.860198 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-16 22:24:53.864997 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-16 22:24:53.865214 I | embed: listening for peers on 192.168.50.226:2380
	2021-08-16 22:24:53.868083 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-16 22:24:53.871735 I | etcdserver/api: enabled capabilities for version 3.4
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a is starting a new election at term 2
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a became candidate at term 3
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a received MsgVoteResp from e840193bf29c3b2a at term 3
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a became leader at term 3
	raft2021/08/16 22:24:54 INFO: raft.node: e840193bf29c3b2a elected leader e840193bf29c3b2a at term 3
	2021-08-16 22:24:54.968820 I | embed: ready to serve client requests
	2021-08-16 22:24:54.969394 I | etcdserver: published {Name:pause-20210816222224-6986 ClientURLs:[https://192.168.50.226:2379]} to cluster 99b90e1bea73c730
	2021-08-16 22:24:54.971284 I | embed: serving client requests on 192.168.50.226:2379
	2021-08-16 22:24:54.971462 I | embed: ready to serve client requests
	2021-08-16 22:24:54.973508 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-16 22:25:03.067902 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-gkxhz\" " with result "range_response_count:1 size:4860" took too long (140.807991ms) to execute
	2021-08-16 22:25:06.747736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:25:08.138740 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:25:10.645123 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:3838" took too long (108.124514ms) to execute
	2021-08-16 22:25:10.645989 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:665" took too long (107.967343ms) to execute
	2021-08-16 22:25:18.137756 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  22:25:18 up 2 min,  0 users,  load average: 3.35, 1.58, 0.61
	Linux pause-20210816222224-6986 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [38dc61b214a9cbd019de4ca9ab52fb6baf728336de6d715df22b027522ad8b20] <==
	* I0816 22:23:41.890272       1 trace.go:205] Trace[914939944]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:23:41.117) (total time: 772ms):
	Trace[914939944]: [772.29448ms] [772.29448ms] END
	I0816 22:23:41.897880       1 trace.go:205] Trace[372773048]: "List" url:/api/v1/nodes,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/system:serviceaccount:kube-system:node-controller,client:192.168.50.226,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:41.117) (total time: 780ms):
	Trace[372773048]: ---"Listing from storage done" 773ms (22:23:00.891)
	Trace[372773048]: [780.024685ms] [780.024685ms] END
	I0816 22:23:41.899245       1 trace.go:205] Trace[189474875]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-20210816222224-6986,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.50.226,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (16-Aug-2021 22:23:41.107) (total time: 791ms):
	Trace[189474875]: ---"About to write a response" 791ms (22:23:00.899)
	Trace[189474875]: [791.769473ms] [791.769473ms] END
	I0816 22:23:41.914143       1 trace.go:205] Trace[1803257945]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (16-Aug-2021 22:23:41.101) (total time: 812ms):
	Trace[1803257945]: ---"initial value restored" 795ms (22:23:00.897)
	Trace[1803257945]: [812.099383ms] [812.099383ms] END
	I0816 22:23:46.219827       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0816 22:23:46.322056       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0816 22:23:54.101003       1 trace.go:205] Trace[1429856954]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-ctgf5,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.50.1,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:53.002) (total time: 1098ms):
	Trace[1429856954]: ---"About to write a response" 1098ms (22:23:00.100)
	Trace[1429856954]: [1.0988209s] [1.0988209s] END
	I0816 22:23:56.194218       1 client.go:360] parsed scheme: "passthrough"
	I0816 22:23:56.194943       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 22:23:56.195388       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0816 22:23:57.770900       1 trace.go:205] Trace[2103117378]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-ctgf5,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.50.1,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:57.002) (total time: 767ms):
	Trace[2103117378]: ---"About to write a response" 767ms (22:23:00.770)
	Trace[2103117378]: [767.944134ms] [767.944134ms] END
	I0816 22:24:32.818404       1 client.go:360] parsed scheme: "passthrough"
	I0816 22:24:32.818597       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 22:24:32.818691       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-apiserver [53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf] <==
	* I0816 22:24:59.052878       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0816 22:24:59.052897       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0816 22:24:59.071128       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0816 22:24:59.071704       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0816 22:24:59.072328       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0816 22:24:59.072872       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
	I0816 22:24:59.173327       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0816 22:24:59.176720       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	E0816 22:24:59.181278       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0816 22:24:59.206356       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0816 22:24:59.225165       1 cache.go:39] Caches are synced for autoregister controller
	I0816 22:24:59.227741       1 apf_controller.go:299] Running API Priority and Fairness config worker
	I0816 22:24:59.230223       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0816 22:24:59.244026       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 22:24:59.248943       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0816 22:25:00.021310       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0816 22:25:00.022052       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0816 22:25:00.034218       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0816 22:25:01.108795       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0816 22:25:01.182177       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0816 22:25:01.279321       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0816 22:25:01.344553       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 22:25:01.382891       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 22:25:11.471022       1 controller.go:611] quota admission added evaluator for: endpoints
	I0816 22:25:13.002505       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612] <==
	* 
	* ==> kube-controller-manager [825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7] <==
	* I0816 22:25:12.900492       1 shared_informer.go:247] Caches are synced for GC 
	I0816 22:25:12.900735       1 shared_informer.go:247] Caches are synced for job 
	I0816 22:25:12.908539       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0816 22:25:12.910182       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0816 22:25:12.925990       1 shared_informer.go:247] Caches are synced for stateful set 
	I0816 22:25:12.926195       1 shared_informer.go:247] Caches are synced for HPA 
	I0816 22:25:12.931999       1 shared_informer.go:247] Caches are synced for attach detach 
	I0816 22:25:12.933971       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0816 22:25:12.934151       1 shared_informer.go:247] Caches are synced for deployment 
	I0816 22:25:12.943776       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0816 22:25:12.963727       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0816 22:25:12.969209       1 shared_informer.go:247] Caches are synced for taint 
	I0816 22:25:12.969381       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	W0816 22:25:12.969524       1 node_lifecycle_controller.go:1013] Missing timestamp for Node pause-20210816222224-6986. Assuming now as a timestamp.
	I0816 22:25:12.969564       1 node_lifecycle_controller.go:1214] Controller detected that zone  is now in state Normal.
	I0816 22:25:12.970457       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0816 22:25:12.970831       1 event.go:291] "Event occurred" object="pause-20210816222224-6986" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20210816222224-6986 event: Registered Node pause-20210816222224-6986 in Controller"
	I0816 22:25:12.974749       1 shared_informer.go:247] Caches are synced for endpoint 
	I0816 22:25:13.000548       1 shared_informer.go:247] Caches are synced for disruption 
	I0816 22:25:13.000739       1 disruption.go:371] Sending events to api server.
	I0816 22:25:13.004608       1 shared_informer.go:247] Caches are synced for resource quota 
	I0816 22:25:13.016848       1 shared_informer.go:247] Caches are synced for resource quota 
	I0816 22:25:13.386564       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0816 22:25:13.386597       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0816 22:25:13.440139       1 shared_informer.go:247] Caches are synced for garbage collector 
	
	* 
	* ==> kube-controller-manager [8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4] <==
	* 
	* ==> kube-proxy [2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2] <==
	* I0816 22:25:00.641886       1 node.go:172] Successfully retrieved node IP: 192.168.50.226
	I0816 22:25:00.641938       1 server_others.go:140] Detected node IP 192.168.50.226
	W0816 22:25:00.642012       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0816 22:25:00.805515       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0816 22:25:00.805539       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0816 22:25:00.805560       1 server_others.go:212] Using iptables Proxier.
	I0816 22:25:00.806059       1 server.go:643] Version: v1.21.3
	I0816 22:25:00.807251       1 config.go:315] Starting service config controller
	I0816 22:25:00.807281       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:25:00.807307       1 config.go:224] Starting endpoint slice config controller
	I0816 22:25:00.807313       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:25:00.812511       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 22:25:00.816722       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:25:00.907844       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 22:25:00.907906       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d] <==
	* 
	* ==> kube-proxy [a8503bd796d5d979a6e1b8b5154986e8b77de391b4f091211451ea5f52808e52] <==
	* I0816 22:23:49.316430       1 node.go:172] Successfully retrieved node IP: 192.168.50.226
	I0816 22:23:49.316608       1 server_others.go:140] Detected node IP 192.168.50.226
	W0816 22:23:49.316822       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0816 22:23:49.402698       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0816 22:23:49.403462       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0816 22:23:49.404047       1 server_others.go:212] Using iptables Proxier.
	I0816 22:23:49.407950       1 server.go:643] Version: v1.21.3
	I0816 22:23:49.410864       1 config.go:315] Starting service config controller
	I0816 22:23:49.413112       1 config.go:224] Starting endpoint slice config controller
	I0816 22:23:49.419474       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:23:49.421254       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:23:49.413718       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:23:49.425958       1 shared_informer.go:247] Caches are synced for service config 
	W0816 22:23:49.425586       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:23:49.520425       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1] <==
	* I0816 22:24:54.634243       1 serving.go:347] Generated self-signed cert in-memory
	W0816 22:24:59.095457       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 22:24:59.098028       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 22:24:59.098491       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 22:24:59.098734       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 22:24:59.166481       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0816 22:24:59.178395       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:24:59.177851       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0816 22:24:59.194249       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:24:59.304036       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [8710cefecdbe5d31cd44e9ae3378bc08cbc56001326a1cb38026755196cac7d1] <==
	* E0816 22:23:21.172468       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:23:21.189536       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:23:21.300836       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.329219       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.448607       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:23:21.504104       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.504531       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:23:21.597849       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:23:21.612843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:23:21.671333       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:23:21.827198       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:23:21.852843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:23:21.867015       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.910139       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:23:23.291774       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.356078       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.452841       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:23:23.464942       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:23:23.644764       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:23:23.649142       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.710606       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.980099       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:23:24.052112       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:23:24.168543       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0816 22:23:30.043826       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
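
Note: the burst of "forbidden" list/watch errors above is typically the scheduler starting before the API server has its default RBAC bindings ready; once they exist, the informer caches sync (last line). One way to confirm the permissions afterwards, assuming kubectl is pointed at this cluster:

    # impersonate the scheduler identity and probe a representative permission
    kubectl auth can-i list pods --all-namespaces --as=system:kube-scheduler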
	
	* 
	* ==> kube-scheduler [97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a] <==
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:22:35 UTC, end at Mon 2021-08-16 22:25:19 UTC. --
	Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.514985    4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
	Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.616076    4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
	Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.718006    4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
	Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.819104    4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
	Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.919357    4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:59.020392    4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.121233    4551 kuberuntime_manager.go:1044] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.122462    4551 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.228577    4551 kubelet_node_status.go:109] "Node was previously registered" node="pause-20210816222224-6986"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.228853    4551 kubelet_node_status.go:74] "Successfully registered node" node="pause-20210816222224-6986"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.536346    4551 apiserver.go:52] "Watching apiserver"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.540959    4551 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.541581    4551 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.609734    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c0e0899-31c1-477a-a6d4-2844091deea2-kube-proxy\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.610130    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c0e0899-31c1-477a-a6d4-2844091deea2-xtables-lock\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.610271    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c0e0899-31c1-477a-a6d4-2844091deea2-lib-modules\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.610503    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2grh\" (UniqueName: \"kubernetes.io/projected/3c0e0899-31c1-477a-a6d4-2844091deea2-kube-api-access-b2grh\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.711424    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgpd2\" (UniqueName: \"kubernetes.io/projected/5aa76749-775e-423d-bbf9-680a20a27051-kube-api-access-rgpd2\") pod \"coredns-558bd4d5db-gkxhz\" (UID: \"5aa76749-775e-423d-bbf9-680a20a27051\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.712578    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5aa76749-775e-423d-bbf9-680a20a27051-config-volume\") pod \"coredns-558bd4d5db-gkxhz\" (UID: \"5aa76749-775e-423d-bbf9-680a20a27051\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.713123    4551 reconciler.go:157] "Reconciler: start to sync state"
	Aug 16 22:25:00 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:00.142816    4551 scope.go:111] "RemoveContainer" containerID="9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d"
	Aug 16 22:25:03 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:03.115940    4551 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Aug 16 22:25:11 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:11.548694    4551 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:25:11 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:11.620746    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4f138dc7-da0e-4775-b4de-b0f7d616b212-tmp\") pod \"storage-provisioner\" (UID: \"4f138dc7-da0e-4775-b4de-b0f7d616b212\") "
	Aug 16 22:25:11 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:11.621027    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7pzn\" (UniqueName: \"kubernetes.io/projected/4f138dc7-da0e-4775-b4de-b0f7d616b212-kube-api-access-n7pzn\") pod \"storage-provisioner\" (UID: \"4f138dc7-da0e-4775-b4de-b0f7d616b212\") "
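
Note: the kubelet log above shows the node re-registering and the pod CIDR being pushed to the runtime. A quick check that the CIDR actually landed on the node object (expected value taken from the "Updating Pod CIDR" line above), assuming kubectl access to this cluster:

    kubectl get node pause-20210816222224-6986 -o jsonpath='{.spec.podCIDR}'
    # expected output: 10.244.0.0/24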
	
	* 
	* ==> storage-provisioner [f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a] <==
	* I0816 22:25:12.920503       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 22:25:12.958814       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 22:25:12.959432       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0816 22:25:13.028463       1 leaderelection.go:361] Failed to update lock: Operation cannot be fulfilled on endpoints "k8s.io-minikube-hostpath": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/kube-system/k8s.io-minikube-hostpath, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 3f27bbad-30a1-4386-9d09-80525f79ada9, UID in object meta: 
	I0816 22:25:16.530709       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 22:25:16.540393       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_pause-20210816222224-6986_409bd634-6095-4f9a-ab3f-09a5e699e184!
	I0816 22:25:16.544131       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"32edeef2-57a3-43b1-a3d9-e7ecc2ed1a14", APIVersion:"v1", ResourceVersion:"582", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' pause-20210816222224-6986_409bd634-6095-4f9a-ab3f-09a5e699e184 became leader
	I0816 22:25:16.647143       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_pause-20210816222224-6986_409bd634-6095-4f9a-ab3f-09a5e699e184!
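
Note: the single "Failed to update lock" error above appears to be the provisioner finding a stale endpoints object from the previous start (the UID precondition mismatch); it recovers by acquiring a fresh lease a few seconds later. To inspect the endpoints-based election record directly, a sketch assuming kubectl access to the cluster (the holder identity is recorded in an annotation on this object):

    kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml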
	
	
-- /stdout --
** stderr ** 
	E0816 22:25:18.365377   11348 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:25:18Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log\": lstat /var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:25:18Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log\\\": lstat /var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:25:18.618219   11348 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:25:18Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log\": lstat /var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:25:18Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log\\\": lstat /var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:25:18.731515   11348 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:25:18Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log\": lstat /var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:25:18Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log\\\": lstat /var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:25:18.825217   11348 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:25:18Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log\": lstat /var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:25:18Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log\\\": lstat /var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:25:19.031711   11348 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:25:19Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log\": lstat /var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:25:19Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log\\\": lstat /var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log: no such file or directory\"\n\n** /stderr **"
	! unable to fetch logs for: etcd [3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423], kube-apiserver [7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612], kube-controller-manager [8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4], kube-proxy [9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d], kube-scheduler [97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a]

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (50.84s)
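
Note: every "failed to try resolving symlinks" error above has the same shape: crictl logs is asked for a container whose /var/log/pods/... file disappeared when the pod was recreated during the second start. A best-effort variant that tolerates the missing files (container ids copied from the errors above):

    for id in 3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423 \
              97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a; do
      sudo /bin/crictl logs --tail 25 "$id" \
        || echo "no log file for $id (pod recreated?)" >&2
    done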

TestPause/serial/Pause (36.64s)

=== RUN   TestPause/serial/Pause
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210816222224-6986 --alsologtostderr -v=5

=== CONT  TestPause/serial/Pause
pause_test.go:107: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-20210816222224-6986 --alsologtostderr -v=5: exit status 80 (9.143414576s)

-- stdout --
	* Pausing node pause-20210816222224-6986 ... 
	
	

-- /stdout --
** stderr ** 
	I0816 22:25:19.202708   11374 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:25:19.203635   11374 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:25:19.203651   11374 out.go:311] Setting ErrFile to fd 2...
	I0816 22:25:19.203656   11374 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:25:19.203796   11374 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:25:19.204036   11374 out.go:305] Setting JSON to false
	I0816 22:25:19.204061   11374 mustload.go:65] Loading cluster: pause-20210816222224-6986
	I0816 22:25:19.204438   11374 config.go:177] Loaded profile config "pause-20210816222224-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:25:19.204992   11374 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:19.205059   11374 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:19.217142   11374 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41109
	I0816 22:25:19.217545   11374 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:19.218149   11374 main.go:130] libmachine: Using API Version  1
	I0816 22:25:19.218167   11374 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:19.219118   11374 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:19.219470   11374 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
	I0816 22:25:19.223263   11374 host.go:66] Checking if "pause-20210816222224-6986" exists ...
	I0816 22:25:19.223720   11374 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:19.223761   11374 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:19.241023   11374 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33661
	I0816 22:25:19.241420   11374 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:19.242345   11374 main.go:130] libmachine: Using API Version  1
	I0816 22:25:19.242365   11374 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:19.242709   11374 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:19.242901   11374 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:25:19.243616   11374 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-20210816222224-6986 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0816 22:25:19.246141   11374 out.go:177] * Pausing node pause-20210816222224-6986 ... 
	I0816 22:25:19.246166   11374 host.go:66] Checking if "pause-20210816222224-6986" exists ...
	I0816 22:25:19.246492   11374 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:19.246534   11374 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:19.258286   11374 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40399
	I0816 22:25:19.258736   11374 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:19.259269   11374 main.go:130] libmachine: Using API Version  1
	I0816 22:25:19.259291   11374 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:19.259665   11374 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:19.259844   11374 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:25:19.260039   11374 ssh_runner.go:149] Run: systemctl --version
	I0816 22:25:19.260065   11374 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
	I0816 22:25:19.266482   11374 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:25:19.267087   11374 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
	I0816 22:25:19.267123   11374 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
	I0816 22:25:19.267160   11374 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:25:19.267308   11374 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
	I0816 22:25:19.267452   11374 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
	I0816 22:25:19.267554   11374 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
	I0816 22:25:19.378076   11374 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:25:19.393534   11374 pause.go:50] kubelet running: true
	I0816 22:25:19.393615   11374 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:25:19.654456   11374 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:25:19.654581   11374 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:25:19.819302   11374 cri.go:76] found id: "f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a"
	I0816 22:25:19.819330   11374 cri.go:76] found id: "e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a"
	I0816 22:25:19.819334   11374 cri.go:76] found id: "2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2"
	I0816 22:25:19.819338   11374 cri.go:76] found id: "53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf"
	I0816 22:25:19.819342   11374 cri.go:76] found id: "76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1"
	I0816 22:25:19.819345   11374 cri.go:76] found id: "69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549"
	I0816 22:25:19.819351   11374 cri.go:76] found id: "825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7"
	I0816 22:25:19.819356   11374 cri.go:76] found id: "7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612"
	I0816 22:25:19.819361   11374 cri.go:76] found id: "9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d"
	I0816 22:25:19.819371   11374 cri.go:76] found id: "97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a"
	I0816 22:25:19.819377   11374 cri.go:76] found id: "3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423"
	I0816 22:25:19.819383   11374 cri.go:76] found id: "8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4"
	I0816 22:25:19.819398   11374 cri.go:76] found id: "28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0"
	I0816 22:25:19.819404   11374 cri.go:76] found id: "a8503bd796d5d979a6e1b8b5154986e8b77de391b4f091211451ea5f52808e52"
	I0816 22:25:19.819414   11374 cri.go:76] found id: "124fa393359f758ea47161b345d2cab4ce486d4473a4caad483449464d44315f"
	I0816 22:25:19.819420   11374 cri.go:76] found id: "8710cefecdbe5d31cd44e9ae3378bc08cbc56001326a1cb38026755196cac7d1"
	I0816 22:25:19.819430   11374 cri.go:76] found id: "38dc61b214a9cbd019de4ca9ab52fb6baf728336de6d715df22b027522ad8b20"
	I0816 22:25:19.819445   11374 cri.go:76] found id: ""
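
Note: the id list above is assembled by querying crictl once per namespace with a label filter, exactly as the Run line earlier shows; the single-namespace form is runnable as-is on the node:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
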
	I0816 22:25:19.819497   11374 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0816 22:25:19.861299   11374 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c","pid":5197,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c/rootfs","created":"2021-08-16T22:25:12.066223998Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_4f138dc7-da0e-4775-b4de-b0f7d616b212"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d","pid":4260,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1718d2a0276cefe490a041b714377b70ea31
374bde370f898789e3a342438c2d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d/rootfs","created":"2021-08-16T22:24:38.416064875Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2","pid":4868,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2/rootfs","created":"2021-08-16T22:25:00.408067872Z","annotations":{"io.kubernetes.cri.container-name":"kube-prox
y","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb","pid":4355,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb/rootfs","created":"2021-08-16T22:24:38.822262087Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf","pid":4782,"stat
us":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf/rootfs","created":"2021-08-16T22:24:53.416372678Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549","pid":4797,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549/rootfs","created":"2021-08-16T22:24:53.509902629Z","annotations":{"io.kubernetes.cri.con
tainer-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1","pid":4789,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1/rootfs","created":"2021-08-16T22:24:53.511150294Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7","pid":4679,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v
2.task/k8s.io/825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7/rootfs","created":"2021-08-16T22:24:52.973956456Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb","pid":5035,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb/rootfs","created":"2021-08-16T22:25:00.913945951Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":
"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-gkxhz_5aa76749-775e-423d-bbf9-680a20a27051"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8","pid":4358,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8/rootfs","created":"2021-08-16T22:24:39.327096963Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e70dd80568a0a134cd147b42c9c85b176b8e57570
012074e1f92a3b1a94bab9a","pid":5094,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a/rootfs","created":"2021-08-16T22:25:01.539181689Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a","pid":5251,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a/rootfs","created":"2021-08-16T22:25:12.818349297Z","an
notations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a","pid":4383,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a/rootfs","created":"2021-08-16T22:24:39.213036413Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"feab70
7eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03","pid":4283,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03/rootfs","created":"2021-08-16T22:24:38.475165162Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c"},"owner":"root"}]
	I0816 22:25:19.861614   11374 cri.go:113] list returned 14 containers
	I0816 22:25:19.861634   11374 cri.go:116] container: {ID:0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c Status:running}
	I0816 22:25:19.861654   11374 cri.go:118] skipping 0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c - not in ps
	I0816 22:25:19.861660   11374 cri.go:116] container: {ID:1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d Status:running}
	I0816 22:25:19.861671   11374 cri.go:118] skipping 1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d - not in ps
	I0816 22:25:19.861681   11374 cri.go:116] container: {ID:2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2 Status:running}
	I0816 22:25:19.861688   11374 cri.go:116] container: {ID:3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb Status:running}
	I0816 22:25:19.861703   11374 cri.go:118] skipping 3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb - not in ps
	I0816 22:25:19.861725   11374 cri.go:116] container: {ID:53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf Status:running}
	I0816 22:25:19.861743   11374 cri.go:116] container: {ID:69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549 Status:running}
	I0816 22:25:19.861760   11374 cri.go:116] container: {ID:76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1 Status:running}
	I0816 22:25:19.861768   11374 cri.go:116] container: {ID:825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7 Status:running}
	I0816 22:25:19.861778   11374 cri.go:116] container: {ID:c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb Status:running}
	I0816 22:25:19.861790   11374 cri.go:118] skipping c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb - not in ps
	I0816 22:25:19.861796   11374 cri.go:116] container: {ID:d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8 Status:running}
	I0816 22:25:19.861809   11374 cri.go:118] skipping d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8 - not in ps
	I0816 22:25:19.861819   11374 cri.go:116] container: {ID:e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a Status:running}
	I0816 22:25:19.861827   11374 cri.go:116] container: {ID:f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a Status:running}
	I0816 22:25:19.861837   11374 cri.go:116] container: {ID:fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a Status:running}
	I0816 22:25:19.861845   11374 cri.go:118] skipping fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a - not in ps
	I0816 22:25:19.861852   11374 cri.go:116] container: {ID:feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03 Status:running}
	I0816 22:25:19.861861   11374 cri.go:118] skipping feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03 - not in ps
	I0816 22:25:19.861920   11374 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2
	I0816 22:25:19.887592   11374 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2 53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf
	I0816 22:25:19.934132   11374 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2 53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:25:19Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0816 22:25:20.210603   11374 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:25:20.228078   11374 pause.go:50] kubelet running: false
	I0816 22:25:20.228146   11374 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:25:20.430201   11374 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:25:20.430282   11374 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:25:20.630366   11374 cri.go:76] found id: "f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a"
	I0816 22:25:20.630407   11374 cri.go:76] found id: "e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a"
	I0816 22:25:20.630414   11374 cri.go:76] found id: "2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2"
	I0816 22:25:20.630420   11374 cri.go:76] found id: "53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf"
	I0816 22:25:20.630428   11374 cri.go:76] found id: "76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1"
	I0816 22:25:20.630434   11374 cri.go:76] found id: "69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549"
	I0816 22:25:20.630439   11374 cri.go:76] found id: "825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7"
	I0816 22:25:20.630445   11374 cri.go:76] found id: "7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612"
	I0816 22:25:20.630450   11374 cri.go:76] found id: "9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d"
	I0816 22:25:20.630460   11374 cri.go:76] found id: "97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a"
	I0816 22:25:20.630469   11374 cri.go:76] found id: "3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423"
	I0816 22:25:20.630475   11374 cri.go:76] found id: "8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4"
	I0816 22:25:20.630483   11374 cri.go:76] found id: "28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0"
	I0816 22:25:20.630490   11374 cri.go:76] found id: "a8503bd796d5d979a6e1b8b5154986e8b77de391b4f091211451ea5f52808e52"
	I0816 22:25:20.630499   11374 cri.go:76] found id: "124fa393359f758ea47161b345d2cab4ce486d4473a4caad483449464d44315f"
	I0816 22:25:20.630508   11374 cri.go:76] found id: "8710cefecdbe5d31cd44e9ae3378bc08cbc56001326a1cb38026755196cac7d1"
	I0816 22:25:20.630514   11374 cri.go:76] found id: "38dc61b214a9cbd019de4ca9ab52fb6baf728336de6d715df22b027522ad8b20"
	I0816 22:25:20.630520   11374 cri.go:76] found id: ""
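
Note: this second enumeration runs after the first container was paused, which is why kube-proxy now reports status "paused" in the JSON below. For a human-readable view of the same state, the table form of the command minikube runs next can be used on the node:

    sudo runc --root /run/containerd/runc/k8s.io list
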
	I0816 22:25:20.630567   11374 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0816 22:25:20.679833   11374 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c","pid":5197,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c/rootfs","created":"2021-08-16T22:25:12.066223998Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_4f138dc7-da0e-4775-b4de-b0f7d616b212"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d","pid":4260,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1718d2a0276cefe490a041b714377b70ea31
374bde370f898789e3a342438c2d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d/rootfs","created":"2021-08-16T22:24:38.416064875Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2","pid":4868,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2/rootfs","created":"2021-08-16T22:25:00.408067872Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy
","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb","pid":4355,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb/rootfs","created":"2021-08-16T22:24:38.822262087Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf","pid":4782,"statu
s":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf/rootfs","created":"2021-08-16T22:24:53.416372678Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549","pid":4797,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549/rootfs","created":"2021-08-16T22:24:53.509902629Z","annotations":{"io.kubernetes.cri.cont
ainer-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1","pid":4789,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1/rootfs","created":"2021-08-16T22:24:53.511150294Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7","pid":4679,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2
.task/k8s.io/825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7/rootfs","created":"2021-08-16T22:24:52.973956456Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb","pid":5035,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb/rootfs","created":"2021-08-16T22:25:00.913945951Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"
c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-gkxhz_5aa76749-775e-423d-bbf9-680a20a27051"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8","pid":4358,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8/rootfs","created":"2021-08-16T22:24:39.327096963Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e70dd80568a0a134cd147b42c9c85b176b8e575700
12074e1f92a3b1a94bab9a","pid":5094,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a/rootfs","created":"2021-08-16T22:25:01.539181689Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a","pid":5251,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a/rootfs","created":"2021-08-16T22:25:12.818349297Z","ann
otations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a","pid":4383,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a/rootfs","created":"2021-08-16T22:24:39.213036413Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"feab707
eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03","pid":4283,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03/rootfs","created":"2021-08-16T22:24:38.475165162Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c"},"owner":"root"}]
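The "sudo runc --root /run/containerd/runc/k8s.io list -f json" dump above is a flat JSON array of container records (id, pid, status, bundle, rootfs, created, annotations, owner). A minimal Go sketch of reading it, with field names taken from the keys visible in the log (the helper itself is illustrative, not minikube's actual code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer mirrors one element of the JSON array shown above.
type runcContainer struct {
	ID          string            `json:"id"`
	Pid         int               `json:"pid"`
	Status      string            `json:"status"` // "running" or "paused" in this log
	Bundle      string            `json:"bundle"`
	Rootfs      string            `json:"rootfs"`
	Created     string            `json:"created"`
	Annotations map[string]string `json:"annotations"`
	Owner       string            `json:"owner"`
}

// listContainers shells out to runc the same way the ssh_runner lines do.
func listContainers(root string) ([]runcContainer, error) {
	out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
	if err != nil {
		return nil, err
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	return cs, nil
}

func main() {
	cs, err := listContainers("/run/containerd/runc/k8s.io")
	if err != nil {
		fmt.Println("runc list failed:", err)
		return
	}
	for _, c := range cs {
		// %.12s truncates the 64-char ID to the usual short form.
		fmt.Printf("%.12s %-7s %s\n", c.ID, c.Status, c.Annotations["io.kubernetes.cri.container-name"])
	}
}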
	I0816 22:25:20.680071   11374 cri.go:113] list returned 14 containers
	I0816 22:25:20.680087   11374 cri.go:116] container: {ID:0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c Status:running}
	I0816 22:25:20.680101   11374 cri.go:118] skipping 0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c - not in ps
	I0816 22:25:20.680109   11374 cri.go:116] container: {ID:1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d Status:running}
	I0816 22:25:20.680116   11374 cri.go:118] skipping 1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d - not in ps
	I0816 22:25:20.680122   11374 cri.go:116] container: {ID:2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2 Status:paused}
	I0816 22:25:20.680130   11374 cri.go:122] skipping {2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2 paused}: state = "paused", want "running"
	I0816 22:25:20.680144   11374 cri.go:116] container: {ID:3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb Status:running}
	I0816 22:25:20.680166   11374 cri.go:118] skipping 3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb - not in ps
	I0816 22:25:20.680172   11374 cri.go:116] container: {ID:53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf Status:running}
	I0816 22:25:20.680179   11374 cri.go:116] container: {ID:69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549 Status:running}
	I0816 22:25:20.680185   11374 cri.go:116] container: {ID:76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1 Status:running}
	I0816 22:25:20.680192   11374 cri.go:116] container: {ID:825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7 Status:running}
	I0816 22:25:20.680197   11374 cri.go:116] container: {ID:c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb Status:running}
	I0816 22:25:20.680204   11374 cri.go:118] skipping c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb - not in ps
	I0816 22:25:20.680209   11374 cri.go:116] container: {ID:d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8 Status:running}
	I0816 22:25:20.680216   11374 cri.go:118] skipping d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8 - not in ps
	I0816 22:25:20.680221   11374 cri.go:116] container: {ID:e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a Status:running}
	I0816 22:25:20.680227   11374 cri.go:116] container: {ID:f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a Status:running}
	I0816 22:25:20.680233   11374 cri.go:116] container: {ID:fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a Status:running}
	I0816 22:25:20.680239   11374 cri.go:118] skipping fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a - not in ps
	I0816 22:25:20.680245   11374 cri.go:116] container: {ID:feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03 Status:running}
	I0816 22:25:20.680252   11374 cri.go:118] skipping feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03 - not in ps
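The cri.go:116-122 lines above walk a simple filter over the 14 returned containers: an ID is kept only if it showed up in the earlier crictl ps listing and its runc state is "running"; everything else is skipped with one of the two "skipping" messages. A self-contained Go sketch of that filter (illustrative, not minikube's implementation):

package main

import "fmt"

// container carries the two fields the filter needs from the runc list output.
type container struct {
	ID     string
	Status string
}

// filterRunning keeps IDs that appeared in the crictl ps output and are
// currently running, mirroring the two skip messages in the log above.
func filterRunning(listed []container, inPs map[string]bool) []string {
	var keep []string
	for _, c := range listed {
		if !inPs[c.ID] {
			continue // "skipping <id> - not in ps"
		}
		if c.Status != "running" {
			continue // skipping {<id> paused}: state = "paused", want "running"
		}
		keep = append(keep, c.ID)
	}
	return keep
}

func main() {
	listed := []container{
		{"53780b27", "running"}, // in ps and running: kept
		{"2585772c", "paused"},  // in ps but paused: skipped
		{"0e10d986", "running"}, // a sandbox, not in ps: skipped
	}
	inPs := map[string]bool{"53780b27": true, "2585772c": true}
	fmt.Println(filterRunning(listed, inPs)) // [53780b27]
}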
	I0816 22:25:20.680299   11374 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf
	I0816 22:25:20.707388   11374 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf 69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549
	I0816 22:25:20.738082   11374 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf 69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:25:20Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
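This stanza shows the root cause of the test failure: runc's pause subcommand accepts exactly one container ID, so the single-ID invocation above succeeds while the follow-up call that batches two IDs onto one command line exits with status 1 and triggers the retry. A sketch of the obvious alternative, pausing one container per invocation (an assumption about a possible fix, not minikube's actual patch):

package main

import (
	"fmt"
	"os/exec"
)

// pauseAll invokes "runc pause" once per container, since the subcommand
// requires exactly one argument (see the usage error above).
func pauseAll(root string, ids []string) error {
	for _, id := range ids {
		out, err := exec.Command("sudo", "runc", "--root", root, "pause", id).CombinedOutput()
		if err != nil {
			return fmt.Errorf("runc pause %s: %v: %s", id, err, out)
		}
	}
	return nil
}

func main() {
	ids := []string{
		"53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf",
		"69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549",
	}
	if err := pauseAll("/run/containerd/runc/k8s.io", ids); err != nil {
		fmt.Println(err)
	}
}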
	I0816 22:25:21.278590   11374 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:25:21.293848   11374 pause.go:50] kubelet running: false
	I0816 22:25:21.293903   11374 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:25:21.537796   11374 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:25:21.537897   11374 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:25:21.713483   11374 cri.go:76] found id: "f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a"
	I0816 22:25:21.713515   11374 cri.go:76] found id: "e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a"
	I0816 22:25:21.713523   11374 cri.go:76] found id: "2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2"
	I0816 22:25:21.713529   11374 cri.go:76] found id: "53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf"
	I0816 22:25:21.713535   11374 cri.go:76] found id: "76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1"
	I0816 22:25:21.713541   11374 cri.go:76] found id: "69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549"
	I0816 22:25:21.713546   11374 cri.go:76] found id: "825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7"
	I0816 22:25:21.713551   11374 cri.go:76] found id: "7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612"
	I0816 22:25:21.713556   11374 cri.go:76] found id: "9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d"
	I0816 22:25:21.713566   11374 cri.go:76] found id: "97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a"
	I0816 22:25:21.713571   11374 cri.go:76] found id: "3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423"
	I0816 22:25:21.713578   11374 cri.go:76] found id: "8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4"
	I0816 22:25:21.713583   11374 cri.go:76] found id: "28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0"
	I0816 22:25:21.713588   11374 cri.go:76] found id: "a8503bd796d5d979a6e1b8b5154986e8b77de391b4f091211451ea5f52808e52"
	I0816 22:25:21.713595   11374 cri.go:76] found id: "124fa393359f758ea47161b345d2cab4ce486d4473a4caad483449464d44315f"
	I0816 22:25:21.713600   11374 cri.go:76] found id: "8710cefecdbe5d31cd44e9ae3378bc08cbc56001326a1cb38026755196cac7d1"
	I0816 22:25:21.713606   11374 cri.go:76] found id: "38dc61b214a9cbd019de4ca9ab52fb6baf728336de6d715df22b027522ad8b20"
	I0816 22:25:21.713616   11374 cri.go:76] found id: ""
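The "found id:" lines come from one chained shell invocation: a crictl ps call per pod namespace, joined with semicolons (the Run: line above), whose combined stdout is then split into container IDs. A small Go sketch of building and parsing that command (an illustrative helper, not minikube's code):

package main

import (
	"fmt"
	"strings"
)

// crictlListCmd reproduces the chained command from the log: one
// "crictl ps -a --quiet --label ..." per namespace, joined with "; ".
func crictlListCmd(namespaces []string) string {
	parts := make([]string, 0, len(namespaces))
	for _, ns := range namespaces {
		parts = append(parts, "crictl ps -a --quiet --label io.kubernetes.pod.namespace="+ns)
	}
	return `sudo -s eval "` + strings.Join(parts, "; ") + `"`
}

// parseIDs splits the combined stdout into non-empty container IDs,
// which become the "found id:" entries.
func parseIDs(output string) []string {
	var ids []string
	for _, line := range strings.Split(output, "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids
}

func main() {
	fmt.Println(crictlListCmd([]string{"kube-system", "kubernetes-dashboard", "storage-gluster", "istio-operator"}))
	fmt.Println(parseIDs("f04c4450\n\ne70dd805\n"))
}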
	I0816 22:25:21.713667   11374 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0816 22:25:21.776610   11374 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c","pid":5197,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c/rootfs","created":"2021-08-16T22:25:12.066223998Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_4f138dc7-da0e-4775-b4de-b0f7d616b212"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d","pid":4260,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1718d2a0276cefe490a041b714377b70ea31
374bde370f898789e3a342438c2d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d/rootfs","created":"2021-08-16T22:24:38.416064875Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2","pid":4868,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2/rootfs","created":"2021-08-16T22:25:00.408067872Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy
","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb","pid":4355,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb/rootfs","created":"2021-08-16T22:24:38.822262087Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf","pid":4782,"statu
s":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf/rootfs","created":"2021-08-16T22:24:53.416372678Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549","pid":4797,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549/rootfs","created":"2021-08-16T22:24:53.509902629Z","annotations":{"io.kubernetes.cri.conta
iner-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1","pid":4789,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1/rootfs","created":"2021-08-16T22:24:53.511150294Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7","pid":4679,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.
task/k8s.io/825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7/rootfs","created":"2021-08-16T22:24:52.973956456Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb","pid":5035,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb/rootfs","created":"2021-08-16T22:25:00.913945951Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c
649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-gkxhz_5aa76749-775e-423d-bbf9-680a20a27051"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8","pid":4358,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8/rootfs","created":"2021-08-16T22:24:39.327096963Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e70dd80568a0a134cd147b42c9c85b176b8e5757001
2074e1f92a3b1a94bab9a","pid":5094,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a/rootfs","created":"2021-08-16T22:25:01.539181689Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a","pid":5251,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a/rootfs","created":"2021-08-16T22:25:12.818349297Z","anno
tations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a","pid":4383,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a/rootfs","created":"2021-08-16T22:24:39.213036413Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"feab707e
b735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03","pid":4283,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03/rootfs","created":"2021-08-16T22:24:38.475165162Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c"},"owner":"root"}]
	I0816 22:25:21.776809   11374 cri.go:113] list returned 14 containers
	I0816 22:25:21.776824   11374 cri.go:116] container: {ID:0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c Status:running}
	I0816 22:25:21.776838   11374 cri.go:118] skipping 0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c - not in ps
	I0816 22:25:21.776844   11374 cri.go:116] container: {ID:1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d Status:running}
	I0816 22:25:21.776850   11374 cri.go:118] skipping 1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d - not in ps
	I0816 22:25:21.776857   11374 cri.go:116] container: {ID:2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2 Status:paused}
	I0816 22:25:21.776874   11374 cri.go:122] skipping {2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2 paused}: state = "paused", want "running"
	I0816 22:25:21.776889   11374 cri.go:116] container: {ID:3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb Status:running}
	I0816 22:25:21.776895   11374 cri.go:118] skipping 3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb - not in ps
	I0816 22:25:21.776900   11374 cri.go:116] container: {ID:53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf Status:paused}
	I0816 22:25:21.776908   11374 cri.go:122] skipping {53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf paused}: state = "paused", want "running"
	I0816 22:25:21.776919   11374 cri.go:116] container: {ID:69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549 Status:running}
	I0816 22:25:21.776925   11374 cri.go:116] container: {ID:76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1 Status:running}
	I0816 22:25:21.776931   11374 cri.go:116] container: {ID:825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7 Status:running}
	I0816 22:25:21.776937   11374 cri.go:116] container: {ID:c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb Status:running}
	I0816 22:25:21.776949   11374 cri.go:118] skipping c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb - not in ps
	I0816 22:25:21.776954   11374 cri.go:116] container: {ID:d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8 Status:running}
	I0816 22:25:21.776960   11374 cri.go:118] skipping d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8 - not in ps
	I0816 22:25:21.776965   11374 cri.go:116] container: {ID:e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a Status:running}
	I0816 22:25:21.776970   11374 cri.go:116] container: {ID:f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a Status:running}
	I0816 22:25:21.776976   11374 cri.go:116] container: {ID:fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a Status:running}
	I0816 22:25:21.776986   11374 cri.go:118] skipping fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a - not in ps
	I0816 22:25:21.776991   11374 cri.go:116] container: {ID:feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03 Status:running}
	I0816 22:25:21.777003   11374 cri.go:118] skipping feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03 - not in ps
	I0816 22:25:21.777046   11374 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549
	I0816 22:25:21.802188   11374 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549 76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1
	I0816 22:25:26.089138   11374 out.go:177] 
	W0816 22:25:26.089549   11374 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549 76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:25:21Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0816 22:25:26.089566   11374 out.go:242] * 
	[warning]: invalid value provided to Color, using default (repeated 8 times)
	W0816 22:25:26.539648   11374 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0816 22:25:28.269097   11374 out.go:177] 

** /stderr **
pause_test.go:109: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-20210816222224-6986 --alsologtostderr -v=5" : exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816222224-6986 -n pause-20210816222224-6986
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816222224-6986 -n pause-20210816222224-6986: exit status 2 (255.272329ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210816222224-6986 logs -n 25

=== CONT  TestPause/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210816222224-6986 logs -n 25: exit status 110 (14.252494553s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                  |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | multinode-20210816215441-6986          | multinode-20210816215441-6986          | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:08:07 UTC | Mon, 16 Aug 2021 22:11:11 UTC |
	|         | stop                                   |                                        |         |         |                               |                               |
	| start   | -p                                     | multinode-20210816215441-6986          | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:11:11 UTC | Mon, 16 Aug 2021 22:15:19 UTC |
	|         | multinode-20210816215441-6986          |                                        |         |         |                               |                               |
	|         | --wait=true -v=8                       |                                        |         |         |                               |                               |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | --driver=kvm2                          |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| start   | -p                                     | multinode-20210816215441-6986-m03      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:19 UTC | Mon, 16 Aug 2021 22:16:20 UTC |
	|         | multinode-20210816215441-6986-m03      |                                        |         |         |                               |                               |
	|         | --driver=kvm2                          |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| delete  | -p                                     | multinode-20210816215441-6986-m03      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:16:20 UTC | Mon, 16 Aug 2021 22:16:21 UTC |
	|         | multinode-20210816215441-6986-m03      |                                        |         |         |                               |                               |
	| delete  | -p                                     | multinode-20210816215441-6986          | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:16:21 UTC | Mon, 16 Aug 2021 22:16:23 UTC |
	|         | multinode-20210816215441-6986          |                                        |         |         |                               |                               |
	| start   | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:18:07 UTC | Mon, 16 Aug 2021 22:19:45 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | --wait=true --preload=false            |                                        |         |         |                               |                               |
	|         | --driver=kvm2                          |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0           |                                        |         |         |                               |                               |
	| ssh     | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:45 UTC | Mon, 16 Aug 2021 22:19:47 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | -- sudo crictl pull busybox            |                                        |         |         |                               |                               |
	| start   | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:48 UTC | Mon, 16 Aug 2021 22:20:39 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=kvm2         |                                        |         |         |                               |                               |
	|         |  --container-runtime=containerd        |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3           |                                        |         |         |                               |                               |
	| ssh     | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:39 UTC | Mon, 16 Aug 2021 22:20:39 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | -- sudo crictl image ls                |                                        |         |         |                               |                               |
	| delete  | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:39 UTC | Mon, 16 Aug 2021 22:20:40 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	| start   | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:40 UTC | Mon, 16 Aug 2021 22:21:45 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	|         | --memory=2048 --driver=kvm2            |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| stop    | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:46 UTC | Mon, 16 Aug 2021 22:21:46 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	|         | --cancel-scheduled                     |                                        |         |         |                               |                               |
	| stop    | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:58 UTC | Mon, 16 Aug 2021 22:22:05 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	|         | --schedule 5s                          |                                        |         |         |                               |                               |
	| delete  | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:23 UTC | Mon, 16 Aug 2021 22:22:24 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	| delete  | -p kubenet-20210816222224-6986         | kubenet-20210816222224-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:22:25 UTC |
	| delete  | -p false-20210816222225-6986           | false-20210816222225-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:25 UTC | Mon, 16 Aug 2021 22:22:25 UTC |
	| start   | -p                                     | force-systemd-env-20210816222224-6986  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:24:04 UTC |
	|         | force-systemd-env-20210816222224-6986  |                                        |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | -v=5 --driver=kvm2                     |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| -p      | force-systemd-env-20210816222224-6986  | force-systemd-env-20210816222224-6986  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:04 UTC | Mon, 16 Aug 2021 22:24:04 UTC |
	|         | ssh cat /etc/containerd/config.toml    |                                        |         |         |                               |                               |
	| delete  | -p                                     | force-systemd-env-20210816222224-6986  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:04 UTC | Mon, 16 Aug 2021 22:24:05 UTC |
	|         | force-systemd-env-20210816222224-6986  |                                        |         |         |                               |                               |
	| start   | -p pause-20210816222224-6986           | pause-20210816222224-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:24:28 UTC |
	|         | --memory=2048                          |                                        |         |         |                               |                               |
	|         | --install-addons=false                 |                                        |         |         |                               |                               |
	|         | --wait=all --driver=kvm2               |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| start   | -p                                     | kubernetes-upgrade-20210816222225-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:25 UTC | Mon, 16 Aug 2021 22:24:48 UTC |
	|         | kubernetes-upgrade-20210816222225-6986 |                                        |         |         |                               |                               |
	|         | --memory=2200                          |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0           |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=kvm2   |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| stop    | -p                                     | kubernetes-upgrade-20210816222225-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:49 UTC | Mon, 16 Aug 2021 22:24:53 UTC |
	|         | kubernetes-upgrade-20210816222225-6986 |                                        |         |         |                               |                               |
	| start   | -p                                     | offline-containerd-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:25:02 UTC |
	|         | offline-containerd-20210816222224-6986 |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048   |                                        |         |         |                               |                               |
	|         | --wait=true --driver=kvm2              |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| delete  | -p                                     | offline-containerd-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:02 UTC | Mon, 16 Aug 2021 22:25:03 UTC |
	|         | offline-containerd-20210816222224-6986 |                                        |         |         |                               |                               |
	| start   | -p pause-20210816222224-6986           | pause-20210816222224-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:28 UTC | Mon, 16 Aug 2021 22:25:13 UTC |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                     |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:24:54
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:24:54.079177   10879 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:24:54.079273   10879 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:24:54.079278   10879 out.go:311] Setting ErrFile to fd 2...
	I0816 22:24:54.079280   10879 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:24:54.079426   10879 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:24:54.079721   10879 out.go:305] Setting JSON to false
	I0816 22:24:54.187099   10879 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":4056,"bootTime":1629148638,"procs":185,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:24:54.187527   10879 start.go:121] virtualization: kvm guest
	I0816 22:24:54.190315   10879 out.go:177] * [kubernetes-upgrade-20210816222225-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:24:54.192235   10879 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:24:54.190469   10879 notify.go:169] Checking for updates...
	I0816 22:24:54.193922   10879 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:24:54.195578   10879 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:24:54.197163   10879 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:24:54.197582   10879 config.go:177] Loaded profile config "kubernetes-upgrade-20210816222225-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0816 22:24:54.197998   10879 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:24:54.198058   10879 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:24:54.215228   10879 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45677
	I0816 22:24:54.215770   10879 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:24:54.216328   10879 main.go:130] libmachine: Using API Version  1
	I0816 22:24:54.216350   10879 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:24:54.216734   10879 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:24:54.216908   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:24:54.217075   10879 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:24:54.217475   10879 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:24:54.217512   10879 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:24:54.229224   10879 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34399
	I0816 22:24:54.229593   10879 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:24:54.230067   10879 main.go:130] libmachine: Using API Version  1
	I0816 22:24:54.230093   10879 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:24:54.230460   10879 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:24:54.230643   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:24:54.279869   10879 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 22:24:54.279899   10879 start.go:278] selected driver: kvm2
	I0816 22:24:54.279906   10879 start.go:751] validating driver "kvm2" against &{Name:kubernetes-upgrade-20210816222225-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.14.0 ClusterName:kubernetes-upgrade-20210816222225-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.116.91 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:24:54.280014   10879 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 22:24:54.281335   10879 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:24:54.282098   10879 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 22:24:54.294712   10879 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0816 22:24:54.295176   10879 cni.go:93] Creating CNI manager for ""
	I0816 22:24:54.295202   10879 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:24:54.295212   10879 start_flags.go:277] config:
	{Name:kubernetes-upgrade-20210816222225-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210816222225-
6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.116.91 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:24:54.295364   10879 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:24:54.297417   10879 out.go:177] * Starting control plane node kubernetes-upgrade-20210816222225-6986 in cluster kubernetes-upgrade-20210816222225-6986
	I0816 22:24:54.297445   10879 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 22:24:54.297484   10879 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0816 22:24:54.297505   10879 cache.go:56] Caching tarball of preloaded images
	I0816 22:24:54.297634   10879 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 22:24:54.297656   10879 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on containerd
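The preload lines describe a plain cache policy: derive the expected tarball name from the Kubernetes version and container runtime, and skip the download when that file already exists under the .minikube cache. A sketch under those assumptions (path layout taken from the log; the logic is illustrative, not minikube's exact code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds the cache path seen in the log for a given
// Kubernetes version and container runtime.
func preloadPath(minikubeHome, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v11-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

// ensurePreload reports whether the tarball is already cached; a real
// implementation would download it when cached is false.
func ensurePreload(minikubeHome, k8sVersion, runtime string) (path string, cached bool) {
	path = preloadPath(minikubeHome, k8sVersion, runtime)
	_, err := os.Stat(path)
	return path, err == nil // err == nil: "Found ... in cache, skipping download"
}

func main() {
	p, cached := ensurePreload(os.Getenv("MINIKUBE_HOME"), "v1.22.0-rc.0", "containerd")
	fmt.Println(p, "cached:", cached)
}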
	I0816 22:24:54.297784   10879 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/config.json ...
	I0816 22:24:54.297977   10879 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:24:54.298007   10879 start.go:313] acquiring machines lock for kubernetes-upgrade-20210816222225-6986: {Name:mk808edd60d1305a42bb85791729eff4573dbb15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 22:24:54.298081   10879 start.go:317] acquired machines lock for "kubernetes-upgrade-20210816222225-6986" in 55.05µs
	I0816 22:24:54.298103   10879 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:24:54.298109   10879 fix.go:55] fixHost starting: 
	I0816 22:24:54.298510   10879 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:24:54.298561   10879 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:24:54.309226   10879 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33255
	I0816 22:24:54.309690   10879 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:24:54.310211   10879 main.go:130] libmachine: Using API Version  1
	I0816 22:24:54.310242   10879 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:24:54.310587   10879 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:24:54.310840   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:24:54.310996   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetState
	I0816 22:24:54.314433   10879 fix.go:108] recreateIfNeeded on kubernetes-upgrade-20210816222225-6986: state=Stopped err=<nil>
	I0816 22:24:54.314482   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	W0816 22:24:54.314626   10879 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:24:52.760695    9171 pod_ready.go:102] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:53.612575   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:24:53.632520   10732 api_server.go:70] duration metric: took 7.033030474s to wait for apiserver process to appear ...
	I0816 22:24:53.632561   10732 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:24:53.632570   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:24:53.633109   10732 api_server.go:255] stopped: https://192.168.50.226:8443/healthz: Get "https://192.168.50.226:8443/healthz": dial tcp 192.168.50.226:8443: connect: connection refused
	I0816 22:24:54.133848   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
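Interleaved with the 10879 restart, process 10732 is polling the apiserver's /healthz endpoint, treating "connection refused" as not-ready and retrying after a short delay. A minimal Go sketch of that wait loop (the endpoint URL comes from the log; the timeouts and the skipped TLS verification are assumptions for the probe):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the given /healthz URL until it returns 200 OK
// or the deadline passes; connection-refused errors just mean "retry".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // e.g. "stopped: ... connection refused", retry
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.50.226:8443/healthz", time.Minute))
}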
	I0816 22:24:54.316518   10879 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-20210816222225-6986" ...
	I0816 22:24:54.316550   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .Start
	I0816 22:24:54.316716   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Ensuring networks are active...
	I0816 22:24:54.318718   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Ensuring network default is active
	I0816 22:24:54.319156   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Ensuring network mk-kubernetes-upgrade-20210816222225-6986 is active
	I0816 22:24:54.319641   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Getting domain xml...
	I0816 22:24:54.321602   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Creating domain...
	I0816 22:24:54.783576   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Waiting to get IP...
	I0816 22:24:54.784705   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:24:54.785273   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has current primary IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:24:54.785327   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Found IP for machine: 192.168.116.91
	I0816 22:24:54.785348   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Reserving static IP address...
	I0816 22:24:54.785810   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-20210816222225-6986", mac: "52:54:00:92:67:21", ip: "192.168.116.91"} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:23:40 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:24:54.785842   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Reserved static IP address: 192.168.116.91
	I0816 22:24:54.785867   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | skip adding static IP to network mk-kubernetes-upgrade-20210816222225-6986 - found existing host DHCP lease matching {name: "kubernetes-upgrade-20210816222225-6986", mac: "52:54:00:92:67:21", ip: "192.168.116.91"}
	I0816 22:24:54.785897   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | Getting to WaitForSSH function...
	I0816 22:24:54.785911   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Waiting for SSH to be available...
	I0816 22:24:54.791673   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:24:54.792070   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:23:40 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:24:54.792097   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:24:54.792320   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | Using SSH client type: external
	I0816 22:24:54.792359   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa (-rw-------)
	I0816 22:24:54.792401   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.116.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:24:54.792424   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | About to run SSH command:
	I0816 22:24:54.792441   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | exit 0
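
WaitForSSH above probes the rebooted guest by running `exit 0` through an external ssh client until it succeeds. A simplified sketch of such a retry loop, assuming a plain ssh binary and only a subset of the flags shown in the log:

-- go sketch --
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries a no-op command over ssh until the daemon answers,
// like the "About to run SSH command: exit 0" probe above.
func waitForSSH(user, ip, keyPath string, attempts int) error {
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("ssh",
			"-o", "StrictHostKeyChecking=no",
			"-o", "ConnectTimeout=10",
			"-i", keyPath,
			fmt.Sprintf("%s@%s", user, ip),
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh not available after %d attempts", attempts)
}

func main() {
	// Example values taken from the log; adjust for a real machine.
	if err := waitForSSH("docker", "192.168.116.91", "/path/to/id_rsa", 5); err != nil {
		fmt.Println(err)
	}
}
-- /go sketch --
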
	I0816 22:24:55.186584    9171 pod_ready.go:102] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:57.682612    9171 pod_ready.go:102] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:59.683949    9171 pod_ready.go:102] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"False"
	I0816 22:24:59.090396   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:24:59.090431   10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:24:59.133677   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:24:59.161347   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:24:59.161378   10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:24:59.633911   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:24:59.639524   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:24:59.639548   10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:25:00.133775   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:25:00.151749   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:25:00.151784   10732 api_server.go:101] status: https://192.168.50.226:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:25:00.633968   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:25:00.646578   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 200:
	ok
	I0816 22:25:00.661937   10732 api_server.go:139] control plane version: v1.21.3
	I0816 22:25:00.661961   10732 api_server.go:129] duration metric: took 7.029396002s to wait for apiserver health ...
	I0816 22:25:00.661972   10732 cni.go:93] Creating CNI manager for ""
	I0816 22:25:00.661979   10732 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:25:01.185512    9171 pod_ready.go:92] pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:01.185545    9171 pod_ready.go:81] duration metric: took 23.534022707s waiting for pod "coredns-558bd4d5db-jrjhw" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.185559    9171 pod_ready.go:78] waiting up to 6m0s for pod "etcd-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.215463    9171 pod_ready.go:92] pod "etcd-offline-containerd-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:01.215489    9171 pod_ready.go:81] duration metric: took 29.921986ms waiting for pod "etcd-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.215503    9171 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.230267    9171 pod_ready.go:92] pod "kube-apiserver-offline-containerd-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:01.230289    9171 pod_ready.go:81] duration metric: took 14.776227ms waiting for pod "kube-apiserver-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.230302    9171 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.241691    9171 pod_ready.go:92] pod "kube-controller-manager-offline-containerd-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:01.241717    9171 pod_ready.go:81] duration metric: took 11.405045ms waiting for pod "kube-controller-manager-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.241733    9171 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dhhrk" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.251986    9171 pod_ready.go:92] pod "kube-proxy-dhhrk" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:01.252017    9171 pod_ready.go:81] duration metric: took 10.275945ms waiting for pod "kube-proxy-dhhrk" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.252030    9171 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.580001    9171 pod_ready.go:92] pod "kube-scheduler-offline-containerd-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:01.580033    9171 pod_ready.go:81] duration metric: took 327.992243ms waiting for pod "kube-scheduler-offline-containerd-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:01.580046    9171 pod_ready.go:38] duration metric: took 36.483444375s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
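
The pod_ready.go waits above poll each system pod until its Ready condition turns True. A rough client-go equivalent for one pod; the kubeconfig path below is a placeholder, and the polling cadence is an assumption.

-- go sketch --
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady mirrors the pod_ready.go checks above: a pod counts as
// "Ready" when its PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-558bd4d5db-jrjhw", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}
-- /go sketch --
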
	I0816 22:25:01.580071    9171 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:25:01.580124    9171 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:01.597074    9171 api_server.go:70] duration metric: took 36.950719971s to wait for apiserver process to appear ...
	I0816 22:25:01.597104    9171 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:25:01.597117    9171 api_server.go:239] Checking apiserver healthz at https://192.168.105.22:8443/healthz ...
	I0816 22:25:01.604325    9171 api_server.go:265] https://192.168.105.22:8443/healthz returned 200:
	ok
	I0816 22:25:01.606279    9171 api_server.go:139] control plane version: v1.21.3
	I0816 22:25:01.606301    9171 api_server.go:129] duration metric: took 9.189625ms to wait for apiserver health ...
	I0816 22:25:01.606312    9171 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:25:01.788694    9171 system_pods.go:59] 7 kube-system pods found
	I0816 22:25:01.788767    9171 system_pods.go:61] "coredns-558bd4d5db-jrjhw" [acdb9f4c-484e-4e02-97c3-368ce130507e] Running
	I0816 22:25:01.788794    9171 system_pods.go:61] "etcd-offline-containerd-20210816222224-6986" [5cab4619-a033-47c0-9009-225ece0f2892] Running
	I0816 22:25:01.788801    9171 system_pods.go:61] "kube-apiserver-offline-containerd-20210816222224-6986" [ea1abce8-a6d2-4e57-81c9-97bdd5eefea4] Running
	I0816 22:25:01.788808    9171 system_pods.go:61] "kube-controller-manager-offline-containerd-20210816222224-6986" [9e75aa0c-4fd9-4812-9163-c6c1a26c9f2e] Running
	I0816 22:25:01.788813    9171 system_pods.go:61] "kube-proxy-dhhrk" [a48ab7f9-7dfc-47de-8aca-c172bea7ff31] Running
	I0816 22:25:01.788819    9171 system_pods.go:61] "kube-scheduler-offline-containerd-20210816222224-6986" [3dd47537-37cc-49f2-a469-8ef39825ba4a] Running
	I0816 22:25:01.788827    9171 system_pods.go:61] "storage-provisioner" [e6290b9f-d87d-488d-8f9e-7cbbc59d9585] Running
	I0816 22:25:01.788835    9171 system_pods.go:74] duration metric: took 182.517591ms to wait for pod list to return data ...
	I0816 22:25:01.788850    9171 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:25:01.981356    9171 default_sa.go:45] found service account: "default"
	I0816 22:25:01.981387    9171 default_sa.go:55] duration metric: took 192.530827ms for default service account to be created ...
	I0816 22:25:01.981399    9171 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:25:02.190487    9171 system_pods.go:86] 7 kube-system pods found
	I0816 22:25:02.190528    9171 system_pods.go:89] "coredns-558bd4d5db-jrjhw" [acdb9f4c-484e-4e02-97c3-368ce130507e] Running
	I0816 22:25:02.190538    9171 system_pods.go:89] "etcd-offline-containerd-20210816222224-6986" [5cab4619-a033-47c0-9009-225ece0f2892] Running
	I0816 22:25:02.190546    9171 system_pods.go:89] "kube-apiserver-offline-containerd-20210816222224-6986" [ea1abce8-a6d2-4e57-81c9-97bdd5eefea4] Running
	I0816 22:25:02.190554    9171 system_pods.go:89] "kube-controller-manager-offline-containerd-20210816222224-6986" [9e75aa0c-4fd9-4812-9163-c6c1a26c9f2e] Running
	I0816 22:25:02.190560    9171 system_pods.go:89] "kube-proxy-dhhrk" [a48ab7f9-7dfc-47de-8aca-c172bea7ff31] Running
	I0816 22:25:02.190567    9171 system_pods.go:89] "kube-scheduler-offline-containerd-20210816222224-6986" [3dd47537-37cc-49f2-a469-8ef39825ba4a] Running
	I0816 22:25:02.190573    9171 system_pods.go:89] "storage-provisioner" [e6290b9f-d87d-488d-8f9e-7cbbc59d9585] Running
	I0816 22:25:02.190582    9171 system_pods.go:126] duration metric: took 209.176198ms to wait for k8s-apps to be running ...
	I0816 22:25:02.190596    9171 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:25:02.190648    9171 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:25:02.207959    9171 system_svc.go:56] duration metric: took 17.354686ms WaitForService to wait for kubelet.
	I0816 22:25:02.207991    9171 kubeadm.go:547] duration metric: took 37.56164237s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:25:02.208036    9171 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:25:02.385401    9171 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:25:02.385432    9171 node_conditions.go:123] node cpu capacity is 2
	I0816 22:25:02.385444    9171 node_conditions.go:105] duration metric: took 177.399541ms to run NodePressure ...
	I0816 22:25:02.385455    9171 start.go:231] waiting for startup goroutines ...
	I0816 22:25:02.438114    9171 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 22:25:02.440691    9171 out.go:177] * Done! kubectl is now configured to use "offline-containerd-20210816222224-6986" cluster and "default" namespace by default
	I0816 22:25:00.663954   10732 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:25:00.664005   10732 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:25:00.674379   10732 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:25:00.699896   10732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:25:00.718704   10732 system_pods.go:59] 6 kube-system pods found
	I0816 22:25:00.718763   10732 system_pods.go:61] "coredns-558bd4d5db-gkxhz" [5aa76749-775e-423d-bbf9-680a20a27051] Running
	I0816 22:25:00.718780   10732 system_pods.go:61] "etcd-pause-20210816222224-6986" [f621b99e-0604-4bed-8c4e-4f5741e52f7b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 22:25:00.718802   10732 system_pods.go:61] "kube-apiserver-pause-20210816222224-6986" [b1c46709-4b0b-4c9c-a701-d595a58214ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 22:25:00.718811   10732 system_pods.go:61] "kube-controller-manager-pause-20210816222224-6986" [777c035e-5f34-469a-afb5-4f8ef90ccbfb] Running
	I0816 22:25:00.718819   10732 system_pods.go:61] "kube-proxy-7l59t" [3c0e0899-31c1-477a-a6d4-2844091deea2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:25:00.718830   10732 system_pods.go:61] "kube-scheduler-pause-20210816222224-6986" [6b32acf9-8108-45a6-901e-70cd125190f8] Running
	I0816 22:25:00.718838   10732 system_pods.go:74] duration metric: took 18.921493ms to wait for pod list to return data ...
	I0816 22:25:00.718847   10732 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:25:00.723789   10732 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:25:00.723820   10732 node_conditions.go:123] node cpu capacity is 2
	I0816 22:25:00.723836   10732 node_conditions.go:105] duration metric: took 4.978152ms to run NodePressure ...
	I0816 22:25:00.723854   10732 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:01.396623   10732 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:25:01.403109   10732 kubeadm.go:746] kubelet initialised
	I0816 22:25:01.403139   10732 kubeadm.go:747] duration metric: took 6.492031ms waiting for restarted kubelet to initialise ...
	I0816 22:25:01.403151   10732 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:25:01.409386   10732 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:03.432924   10732 pod_ready.go:102] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:05.435685   10732 pod_ready.go:102] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:05.951433   10732 pod_ready.go:92] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:05.951457   10732 pod_ready.go:81] duration metric: took 4.542029801s waiting for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:05.951470   10732 pod_ready.go:78] waiting up to 4m0s for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:06.969870   10732 pod_ready.go:92] pod "etcd-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:06.969903   10732 pod_ready.go:81] duration metric: took 1.018424787s waiting for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:06.969918   10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:06.978963   10732 pod_ready.go:92] pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:06.978984   10732 pod_ready.go:81] duration metric: took 9.058114ms waiting for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:06.978997   10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:07.986911   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | SSH cmd err, output: <nil>: 
	I0816 22:25:07.987289   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetConfigRaw
	I0816 22:25:07.988117   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetIP
	I0816 22:25:07.993471   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:07.993933   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:07.993970   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:07.994335   10879 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/config.json ...
	I0816 22:25:07.994547   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:07.994761   10879 machine.go:88] provisioning docker machine ...
	I0816 22:25:07.994788   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:07.994976   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetMachineName
	I0816 22:25:07.995114   10879 buildroot.go:166] provisioning hostname "kubernetes-upgrade-20210816222225-6986"
	I0816 22:25:07.995139   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetMachineName
	I0816 22:25:07.995291   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.000173   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.000497   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.000524   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.000680   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
	I0816 22:25:08.000825   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.000965   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.001081   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
	I0816 22:25:08.001235   10879 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:08.001401   10879 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.116.91 22 <nil> <nil>}
	I0816 22:25:08.001421   10879 main.go:130] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-20210816222225-6986 && echo "kubernetes-upgrade-20210816222225-6986" | sudo tee /etc/hostname
	I0816 22:25:08.156978   10879 main.go:130] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-20210816222225-6986
	
	I0816 22:25:08.157018   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.162417   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.162702   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.162735   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.162864   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
	I0816 22:25:08.163064   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.163277   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.163406   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
	I0816 22:25:08.163558   10879 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:08.163733   10879 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.116.91 22 <nil> <nil>}
	I0816 22:25:08.163761   10879 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-20210816222225-6986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-20210816222225-6986/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-20210816222225-6986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:25:08.307005   10879 main.go:130] libmachine: SSH cmd err, output: <nil>: 
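
The shell snippet above is idempotent: it leaves /etc/hosts alone if the hostname is already mapped, rewrites an existing 127.0.1.1 line if one exists, and appends one otherwise. The same logic against a local file, sketched in Go purely for illustration:

-- go sketch --
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry reproduces the shell logic above: no-op if the
// hostname is present, rewrite an existing 127.0.1.1 line, else append.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).Match(data) {
		return nil // already configured
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + hostname
	var out string
	if loopback.Match(data) {
		out = loopback.ReplaceAllString(string(data), entry)
	} else {
		out = strings.TrimRight(string(data), "\n") + "\n" + entry + "\n"
	}
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	if err := ensureHostsEntry("/tmp/hosts", "kubernetes-upgrade-20210816222225-6986"); err != nil {
		fmt.Println(err)
	}
}
-- /go sketch --
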
	I0816 22:25:08.307035   10879 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:25:08.307053   10879 buildroot.go:174] setting up certificates
	I0816 22:25:08.307064   10879 provision.go:83] configureAuth start
	I0816 22:25:08.307075   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetMachineName
	I0816 22:25:08.307332   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetIP
	I0816 22:25:08.313331   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.313697   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.313729   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.313896   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.318531   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.318844   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.318878   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.318990   10879 provision.go:138] copyHostCerts
	I0816 22:25:08.319059   10879 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:25:08.319073   10879 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:25:08.319128   10879 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:25:08.319254   10879 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:25:08.319268   10879 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:25:08.319294   10879 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:25:08.319359   10879 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:25:08.319368   10879 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:25:08.319397   10879 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
	I0816 22:25:08.319465   10879 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-20210816222225-6986 san=[192.168.116.91 192.168.116.91 localhost 127.0.0.1 minikube kubernetes-upgrade-20210816222225-6986]
	I0816 22:25:08.473458   10879 provision.go:172] copyRemoteCerts
	I0816 22:25:08.473513   10879 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:25:08.473535   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.478720   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.479123   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.479157   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.479301   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
	I0816 22:25:08.479517   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.479669   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
	I0816 22:25:08.479802   10879 sshutil.go:53] new ssh client: &{IP:192.168.116.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa Username:docker}
	I0816 22:25:08.575404   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:25:08.593200   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1285 bytes)
	I0816 22:25:08.611874   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:25:08.631651   10879 provision.go:86] duration metric: configureAuth took 324.57656ms
	I0816 22:25:08.631679   10879 buildroot.go:189] setting minikube options for container-runtime
	I0816 22:25:08.631847   10879 config.go:177] Loaded profile config "kubernetes-upgrade-20210816222225-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0816 22:25:08.631862   10879 machine.go:91] provisioned docker machine in 637.081285ms
	I0816 22:25:08.631877   10879 start.go:267] post-start starting for "kubernetes-upgrade-20210816222225-6986" (driver="kvm2")
	I0816 22:25:08.631885   10879 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:25:08.631905   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:08.632222   10879 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:25:08.632262   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.638223   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.638599   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.638628   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.638804   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
	I0816 22:25:08.639025   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.639186   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
	I0816 22:25:08.639324   10879 sshutil.go:53] new ssh client: &{IP:192.168.116.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa Username:docker}
	I0816 22:25:08.731490   10879 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:25:08.736384   10879 info.go:137] Remote host: Buildroot 2020.02.12
	I0816 22:25:08.736415   10879 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:25:08.736479   10879 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:25:08.736640   10879 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0816 22:25:08.736796   10879 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:25:08.744563   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:25:08.762219   10879 start.go:270] post-start completed in 130.327769ms
	I0816 22:25:08.762269   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:08.762532   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.768066   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.768447   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.768479   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.768580   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
	I0816 22:25:08.768764   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.768937   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.769097   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
	I0816 22:25:08.769278   10879 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:08.769412   10879 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.116.91 22 <nil> <nil>}
	I0816 22:25:08.769423   10879 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0816 22:25:08.908369   10879 main.go:130] libmachine: SSH cmd err, output: <nil>: 1629152708.857933809
	
	I0816 22:25:08.908397   10879 fix.go:212] guest clock: 1629152708.857933809
	I0816 22:25:08.908407   10879 fix.go:225] Guest: 2021-08-16 22:25:08.857933809 +0000 UTC Remote: 2021-08-16 22:25:08.762514681 +0000 UTC m=+14.743694760 (delta=95.419128ms)
	I0816 22:25:08.908465   10879 fix.go:196] guest clock delta is within tolerance: 95.419128ms
	I0816 22:25:08.908473   10879 fix.go:57] fixHost completed within 14.610364111s
	I0816 22:25:08.908483   10879 start.go:80] releasing machines lock for "kubernetes-upgrade-20210816222225-6986", held for 14.610387547s
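
The fix.go clock check above parses the guest's `date` output and accepts the run if host/guest skew stays within tolerance. A minimal sketch of that comparison; the 2s tolerance is an assumption for illustration.

-- go sketch --
package main

import (
	"fmt"
	"time"
)

// clockDeltaOK compares a guest timestamp (seconds.nanoseconds from
// `date`) against the host clock, as in the fix.go lines above.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1629152708, 857933809) // value printed in the log
	delta, ok := clockDeltaOK(guest, time.Now(), 2*time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
-- /go sketch --
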
	I0816 22:25:08.908527   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:08.908801   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetIP
	I0816 22:25:08.914888   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.915258   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.915290   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.915507   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:08.915732   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:08.916309   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .DriverName
	I0816 22:25:08.916592   10879 ssh_runner.go:149] Run: systemctl --version
	I0816 22:25:08.916617   10879 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:25:08.916626   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.916658   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHHostname
	I0816 22:25:08.923331   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.923688   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.923714   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.923808   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
	I0816 22:25:08.923961   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.924114   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
	I0816 22:25:08.924243   10879 sshutil.go:53] new ssh client: &{IP:192.168.116.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa Username:docker}
	I0816 22:25:08.924528   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.924867   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:08.924898   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:08.925049   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHPort
	I0816 22:25:08.925209   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHKeyPath
	I0816 22:25:08.925407   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetSSHUsername
	I0816 22:25:08.925534   10879 sshutil.go:53] new ssh client: &{IP:192.168.116.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/kubernetes-upgrade-20210816222225-6986/id_rsa Username:docker}
	I0816 22:25:09.022865   10879 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 22:25:09.023038   10879 ssh_runner.go:149] Run: sudo crictl images --output json
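
`sudo crictl images --output json` above is how minikube inventories the images already present in containerd before deciding what to load from the preload. A sketch that runs the same command and pulls out the repo tags; field names follow crictl's JSON output, and only the tags are read here.

-- go sketch --
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listImages shells out to crictl the way the ssh_runner line above does,
// then decodes the JSON image list.
func listImages() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return nil, err
	}
	var payload struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &payload); err != nil {
		return nil, err
	}
	var tags []string
	for _, img := range payload.Images {
		tags = append(tags, img.RepoTags...)
	}
	return tags, nil
}

func main() {
	tags, err := listImages()
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, t := range tags {
		fmt.Println(t)
	}
}
-- /go sketch --
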
	I0816 22:25:09.000201   10732 pod_ready.go:102] pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:25:10.499577   10732 pod_ready.go:92] pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:10.499613   10732 pod_ready.go:81] duration metric: took 3.520603411s waiting for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.499631   10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.508715   10732 pod_ready.go:92] pod "kube-proxy-7l59t" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:10.508738   10732 pod_ready.go:81] duration metric: took 9.098529ms waiting for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.508749   10732 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.514516   10732 pod_ready.go:92] pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:10.514536   10732 pod_ready.go:81] duration metric: took 5.779042ms waiting for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.514546   10732 pod_ready.go:38] duration metric: took 9.111379533s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:25:10.514567   10732 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:25:10.530219   10732 ops.go:34] apiserver oom_adj: -16
	I0816 22:25:10.530242   10732 kubeadm.go:604] restartCluster took 31.19958524s
	I0816 22:25:10.530251   10732 kubeadm.go:392] StartCluster complete in 31.557512009s
	I0816 22:25:10.530271   10732 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:25:10.530404   10732 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:10.531238   10732 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:25:10.532000   10732 kapi.go:59] client config for pause-20210816222224-6986: &rest.Config{Host:"https://192.168.50.226:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 22:25:10.647656   10732 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "pause-20210816222224-6986" rescaled to 1
	I0816 22:25:10.647728   10732 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.50.226 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:25:10.647757   10732 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:25:10.647794   10732 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0816 22:25:10.649327   10732 out.go:177] * Verifying Kubernetes components...
	I0816 22:25:10.649398   10732 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:25:10.647852   10732 addons.go:59] Setting storage-provisioner=true in profile "pause-20210816222224-6986"
	I0816 22:25:10.647862   10732 addons.go:59] Setting default-storageclass=true in profile "pause-20210816222224-6986"
	I0816 22:25:10.647991   10732 config.go:177] Loaded profile config "pause-20210816222224-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:25:10.649480   10732 addons.go:135] Setting addon storage-provisioner=true in "pause-20210816222224-6986"
	W0816 22:25:10.649500   10732 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:25:10.649516   10732 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "pause-20210816222224-6986"
	I0816 22:25:10.649532   10732 host.go:66] Checking if "pause-20210816222224-6986" exists ...
	I0816 22:25:10.650748   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:10.650827   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:10.653189   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:10.653249   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:10.664888   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45461
	I0816 22:25:10.665365   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:10.665893   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:25:10.665915   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:10.666315   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:10.666493   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
	I0816 22:25:10.667827   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34733
	I0816 22:25:10.668293   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:10.668762   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:25:10.668782   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:10.669202   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:10.669761   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:10.669802   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:10.670861   10732 kapi.go:59] client config for pause-20210816222224-6986: &rest.Config{Host:"https://192.168.50.226:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/pause-20210816222224-6986/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 22:25:10.676486   10732 addons.go:135] Setting addon default-storageclass=true in "pause-20210816222224-6986"
	W0816 22:25:10.676510   10732 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:25:10.676539   10732 host.go:66] Checking if "pause-20210816222224-6986" exists ...
	I0816 22:25:10.676985   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:10.677031   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:10.682317   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39313
	I0816 22:25:10.682805   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:10.683360   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:25:10.683382   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:10.683737   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:10.683924   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
	I0816 22:25:10.687519   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:25:10.693597   10732 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:25:10.693708   10732 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:25:10.693722   10732 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:25:10.693742   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
	I0816 22:25:10.692712   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45043
	I0816 22:25:10.694563   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:10.695082   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:25:10.695103   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:10.695455   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:10.696063   10732 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:10.696115   10732 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:10.700367   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:25:10.700792   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
	I0816 22:25:10.700813   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:25:10.701111   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
	I0816 22:25:10.701350   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
	I0816 22:25:10.701537   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
	I0816 22:25:10.701730   10732 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
	I0816 22:25:10.709887   10732 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33339
	I0816 22:25:10.710304   10732 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:10.710912   10732 main.go:130] libmachine: Using API Version  1
	I0816 22:25:10.710938   10732 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:10.711336   10732 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:10.711547   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
	I0816 22:25:10.714430   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:25:10.714683   10732 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:25:10.714702   10732 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:25:10.714720   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
	I0816 22:25:10.720808   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:25:10.721319   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
	I0816 22:25:10.721342   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:25:10.721485   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
	I0816 22:25:10.721643   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
	I0816 22:25:10.721769   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
	I0816 22:25:10.721919   10732 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
	I0816 22:25:10.832212   10732 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:25:10.862755   10732 node_ready.go:35] waiting up to 6m0s for node "pause-20210816222224-6986" to be "Ready" ...
	I0816 22:25:10.863120   10732 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0816 22:25:10.867110   10732 node_ready.go:49] node "pause-20210816222224-6986" has status "Ready":"True"
	I0816 22:25:10.867130   10732 node_ready.go:38] duration metric: took 4.344058ms waiting for node "pause-20210816222224-6986" to be "Ready" ...
	I0816 22:25:10.867143   10732 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:25:10.883113   10732 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.892065   10732 pod_ready.go:92] pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:10.892084   10732 pod_ready.go:81] duration metric: took 8.944517ms waiting for pod "coredns-558bd4d5db-gkxhz" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.892096   10732 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:10.895462   10732 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:25:11.127716   10732 pod_ready.go:92] pod "etcd-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:11.127749   10732 pod_ready.go:81] duration metric: took 235.644563ms waiting for pod "etcd-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:11.127765   10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:11.536655   10732 pod_ready.go:92] pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:11.536676   10732 pod_ready.go:81] duration metric: took 408.901449ms waiting for pod "kube-apiserver-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:11.536690   10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:11.539596   10732 main.go:130] libmachine: Making call to close driver server
	I0816 22:25:11.539618   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
	I0816 22:25:11.539697   10732 main.go:130] libmachine: Making call to close driver server
	I0816 22:25:11.539725   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
	I0816 22:25:11.540009   10732 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:25:11.540024   10732 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:25:11.540041   10732 main.go:130] libmachine: Making call to close driver server
	I0816 22:25:11.540041   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | Closing plugin on server side
	I0816 22:25:11.540051   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
	I0816 22:25:11.540067   10732 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:25:11.540075   10732 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:25:11.540083   10732 main.go:130] libmachine: Making call to close driver server
	I0816 22:25:11.540092   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
	I0816 22:25:11.540126   10732 main.go:130] libmachine: (pause-20210816222224-6986) DBG | Closing plugin on server side
	I0816 22:25:11.540298   10732 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:25:11.540310   10732 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:25:11.540320   10732 main.go:130] libmachine: Making call to close driver server
	I0816 22:25:11.540329   10732 main.go:130] libmachine: (pause-20210816222224-6986) Calling .Close
	I0816 22:25:11.540417   10732 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:25:11.540429   10732 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:25:11.540490   10732 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:25:11.540502   10732 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:25:11.542638   10732 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0816 22:25:11.542662   10732 addons.go:344] enableAddons completed in 894.875902ms
	I0816 22:25:11.931820   10732 pod_ready.go:92] pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:11.931845   10732 pod_ready.go:81] duration metric: took 395.147421ms waiting for pod "kube-controller-manager-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:11.931860   10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:12.329464   10732 pod_ready.go:92] pod "kube-proxy-7l59t" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:12.329493   10732 pod_ready.go:81] duration metric: took 397.623774ms waiting for pod "kube-proxy-7l59t" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:12.329507   10732 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:12.734335   10732 pod_ready.go:92] pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:25:12.734360   10732 pod_ready.go:81] duration metric: took 404.844565ms waiting for pod "kube-scheduler-pause-20210816222224-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:25:12.734374   10732 pod_ready.go:38] duration metric: took 1.867218741s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:25:12.734394   10732 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:25:12.734439   10732 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:12.754510   10732 api_server.go:70] duration metric: took 2.106745047s to wait for apiserver process to appear ...
	I0816 22:25:12.754540   10732 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:25:12.754553   10732 api_server.go:239] Checking apiserver healthz at https://192.168.50.226:8443/healthz ...
	I0816 22:25:12.792067   10732 api_server.go:265] https://192.168.50.226:8443/healthz returned 200:
	ok
	I0816 22:25:12.794542   10732 api_server.go:139] control plane version: v1.21.3
	I0816 22:25:12.794565   10732 api_server.go:129] duration metric: took 40.01886ms to wait for apiserver health ...
	I0816 22:25:12.794577   10732 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:25:12.941013   10732 system_pods.go:59] 7 kube-system pods found
	I0816 22:25:12.941048   10732 system_pods.go:61] "coredns-558bd4d5db-gkxhz" [5aa76749-775e-423d-bbf9-680a20a27051] Running
	I0816 22:25:12.941053   10732 system_pods.go:61] "etcd-pause-20210816222224-6986" [f621b99e-0604-4bed-8c4e-4f5741e52f7b] Running
	I0816 22:25:12.941057   10732 system_pods.go:61] "kube-apiserver-pause-20210816222224-6986" [b1c46709-4b0b-4c9c-a701-d595a58214ba] Running
	I0816 22:25:12.941102   10732 system_pods.go:61] "kube-controller-manager-pause-20210816222224-6986" [777c035e-5f34-469a-afb5-4f8ef90ccbfb] Running
	I0816 22:25:12.941116   10732 system_pods.go:61] "kube-proxy-7l59t" [3c0e0899-31c1-477a-a6d4-2844091deea2] Running
	I0816 22:25:12.941122   10732 system_pods.go:61] "kube-scheduler-pause-20210816222224-6986" [6b32acf9-8108-45a6-901e-70cd125190f8] Running
	I0816 22:25:12.941136   10732 system_pods.go:61] "storage-provisioner" [4f138dc7-da0e-4775-b4de-b0f7d616b212] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:25:12.941158   10732 system_pods.go:74] duration metric: took 146.575596ms to wait for pod list to return data ...
	I0816 22:25:12.941176   10732 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:25:13.132349   10732 default_sa.go:45] found service account: "default"
	I0816 22:25:13.132381   10732 default_sa.go:55] duration metric: took 191.195172ms for default service account to be created ...
	I0816 22:25:13.132394   10732 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:25:13.340094   10732 system_pods.go:86] 7 kube-system pods found
	I0816 22:25:13.340135   10732 system_pods.go:89] "coredns-558bd4d5db-gkxhz" [5aa76749-775e-423d-bbf9-680a20a27051] Running
	I0816 22:25:13.340146   10732 system_pods.go:89] "etcd-pause-20210816222224-6986" [f621b99e-0604-4bed-8c4e-4f5741e52f7b] Running
	I0816 22:25:13.340155   10732 system_pods.go:89] "kube-apiserver-pause-20210816222224-6986" [b1c46709-4b0b-4c9c-a701-d595a58214ba] Running
	I0816 22:25:13.340163   10732 system_pods.go:89] "kube-controller-manager-pause-20210816222224-6986" [777c035e-5f34-469a-afb5-4f8ef90ccbfb] Running
	I0816 22:25:13.340172   10732 system_pods.go:89] "kube-proxy-7l59t" [3c0e0899-31c1-477a-a6d4-2844091deea2] Running
	I0816 22:25:13.340184   10732 system_pods.go:89] "kube-scheduler-pause-20210816222224-6986" [6b32acf9-8108-45a6-901e-70cd125190f8] Running
	I0816 22:25:13.340196   10732 system_pods.go:89] "storage-provisioner" [4f138dc7-da0e-4775-b4de-b0f7d616b212] Running
	I0816 22:25:13.340210   10732 system_pods.go:126] duration metric: took 207.809217ms to wait for k8s-apps to be running ...
	I0816 22:25:13.340225   10732 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:25:13.340279   10732 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:25:13.358716   10732 system_svc.go:56] duration metric: took 18.47804ms WaitForService to wait for kubelet.
	I0816 22:25:13.358752   10732 kubeadm.go:547] duration metric: took 2.710991068s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:25:13.358785   10732 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:25:13.536797   10732 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:25:13.536830   10732 node_conditions.go:123] node cpu capacity is 2
	I0816 22:25:13.536848   10732 node_conditions.go:105] duration metric: took 178.056493ms to run NodePressure ...
	I0816 22:25:13.536863   10732 start.go:231] waiting for startup goroutines ...
	I0816 22:25:13.602415   10732 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 22:25:13.604425   10732 out.go:177] * Done! kubectl is now configured to use "pause-20210816222224-6986" cluster and "default" namespace by default
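
The healthz wait logged above (api_server.go) is, at bottom, an HTTPS GET against the apiserver with the cluster CA trusted. A minimal standalone sketch of that probe in Go follows; the CA path placeholder and the endpoint are taken from this run's log and are assumptions for any other profile, not minikube's actual implementation:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	// Assumed CA path; substitute the .minikube/ca.crt for your profile.
	caPEM, err := os.ReadFile("/path/to/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		panic("invalid CA PEM")
	}
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	// Endpoint as reported in the log above.
	resp, err := client.Get("https://192.168.50.226:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // a healthy apiserver returns "200 ok"
}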
	I0816 22:25:13.045168   10879 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.02209826s)
	I0816 22:25:13.045290   10879 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
	I0816 22:25:13.045383   10879 ssh_runner.go:149] Run: which lz4
	I0816 22:25:13.050542   10879 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0816 22:25:13.055627   10879 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0816 22:25:13.055661   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (945588089 bytes)
	I0816 22:25:16.804981   10879 containerd.go:546] Took 3.754500 seconds to copy over tarball
	I0816 22:25:16.805050   10879 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
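
The pod_ready.go waits that dominate the run above poll the API for each system pod's Ready condition until a timeout. A minimal client-go sketch of that pattern, mirroring the "waiting up to 4m0s for pod ... to be Ready" lines — illustrative only, not minikube's actual code; the kubeconfig path is an assumption and the pod name is taken from this log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; minikube writes its own under MINIKUBE_HOME.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Poll until the pod reports the Ready condition, up to 4 minutes.
	err = wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-7l59t", metav1.GetOptions{})
		if err != nil {
			return false, nil // treat errors as transient and keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	fmt.Println("ready:", err == nil)
}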
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	f04c445038901       6e38f40d628db       16 seconds ago       Running             storage-provisioner       0                   0e10d9862204b
	e70dd80568a0a       296a6d5035e2d       27 seconds ago       Running             coredns                   1                   c649190b7c07d
	2585772c8a261       adb2816ea823a       29 seconds ago       Running             kube-proxy                2                   d73b4cafe25f0
	53780b2759956       3d174f00aa39e       36 seconds ago       Running             kube-apiserver            2                   fb9f201b2c2e1
	76fef890edebe       6be0dc1302e30       36 seconds ago       Running             kube-scheduler            2                   1718d2a0276ce
	69a7fab4848c4       0369cf4303ffd       36 seconds ago       Running             etcd                      2                   3b9459ff3a0d8
	825e79d62718c       bc2bb319a7038       36 seconds ago       Running             kube-controller-manager   2                   feab707eb735a
	7626b842ef886       3d174f00aa39e       36 seconds ago       Created             kube-apiserver            1                   fb9f201b2c2e1
	9d9f34b35e099       adb2816ea823a       36 seconds ago       Created             kube-proxy                1                   d73b4cafe25f0
	97c4cc3614116       6be0dc1302e30       36 seconds ago       Created             kube-scheduler            1                   1718d2a0276ce
	3644e35e40a2f       0369cf4303ffd       36 seconds ago       Created             etcd                      1                   3b9459ff3a0d8
	8c5f2c007cff4       bc2bb319a7038       40 seconds ago       Created             kube-controller-manager   1                   feab707eb735a
	28c7161cd49a4       296a6d5035e2d       About a minute ago   Exited              coredns                   0                   05c2427240818
	a8503bd796d5d       adb2816ea823a       About a minute ago   Exited              kube-proxy                0                   a86c3b6ee3a70
	124fa393359f7       0369cf4303ffd       2 minutes ago        Exited              etcd                      0                   94a493a65b593
	8710cefecdbe5       6be0dc1302e30       2 minutes ago        Exited              kube-scheduler            0                   982e66890a90d
	38dc61b214a9c       3d174f00aa39e       2 minutes ago        Exited              kube-apiserver            0                   630ed9d4644e9
	
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2021-08-16 22:22:35 UTC, end at Mon 2021-08-16 22:25:29 UTC. --
	Aug 16 22:24:53 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:24:53.606374155Z" level=info msg="StartContainer for \"69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549\" returns successfully"
	Aug 16 22:24:53 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:24:53.687984942Z" level=info msg="StartContainer for \"76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1\" returns successfully"
	Aug 16 22:24:59 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:24:59.121993631Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.146522428Z" level=info msg="CreateContainer within sandbox \"d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:2,}"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.231610260Z" level=info msg="CreateContainer within sandbox \"d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8\" for &ContainerMetadata{Name:kube-proxy,Attempt:2,} returns container id \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.233198734Z" level=info msg="StartContainer for \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.443953465Z" level=info msg="StopPodSandbox for \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.444769081Z" level=info msg="Container to stop \"28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.461877463Z" level=info msg="StartContainer for \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\" returns successfully"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536079353Z" level=info msg="TearDown network for sandbox \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\" successfully"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536191167Z" level=info msg="StopPodSandbox for \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\" returns successfully"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536962082Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-gkxhz,Uid:5aa76749-775e-423d-bbf9-680a20a27051,Namespace:kube-system,Attempt:1,}"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.776744568Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb pid=5007
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.290447333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-gkxhz,Uid:5aa76749-775e-423d-bbf9-680a20a27051,Namespace:kube-system,Attempt:1,} returns sandbox id \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\""
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.300600113Z" level=info msg="CreateContainer within sandbox \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.389478760Z" level=info msg="CreateContainer within sandbox \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\""
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.397162604Z" level=info msg="StartContainer for \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\""
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.594046909Z" level=info msg="StartContainer for \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\" returns successfully"
	Aug 16 22:25:11 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:11.852957632Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:4f138dc7-da0e-4775-b4de-b0f7d616b212,Namespace:kube-system,Attempt:0,}"
	Aug 16 22:25:11 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:11.903771908Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c pid=5174
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.439549893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:4f138dc7-da0e-4775-b4de-b0f7d616b212,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\""
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.451930506Z" level=info msg="CreateContainer within sandbox \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.521875733Z" level=info msg="CreateContainer within sandbox \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.523292924Z" level=info msg="StartContainer for \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.851898064Z" level=info msg="StartContainer for \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\" returns successfully"
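
The container-status table earlier in this report reflects containerd's "k8s.io" namespace, where kubelet's CRI containers live. A sketch of listing those containers directly with the containerd Go client, assuming the default socket path used by this VM:

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket; CRI containers live in "k8s.io",
	// not containerd's "default" namespace.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		panic(err)
	}
	for _, c := range containers {
		info, err := c.Info(ctx)
		if err != nil {
			continue // skip containers that disappear mid-listing
		}
		fmt.Println(c.ID(), info.Image)
	}
}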
	
	* 
	* ==> coredns [28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0] <==
	* I0816 22:24:19.170128       1 trace.go:205] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.168) (total time: 30001ms):
	Trace[2019727887]: [30.001909435s] [30.001909435s] END
	E0816 22:24:19.170279       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0816 22:24:19.171047       1 trace.go:205] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.166) (total time: 30004ms):
	Trace[939984059]: [30.004733433s] [30.004733433s] END
	E0816 22:24:19.171149       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0816 22:24:19.171258       1 trace.go:205] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.166) (total time: 30004ms):
	Trace[911902081]: [30.004945736s] [30.004945736s] END
	E0816 22:24:19.171265       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7ae91e86dd75dee9ae501cb58003198b
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.985023] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1731 comm=systemd-network
	[  +1.088197] vboxguest: loading out-of-tree module taints kernel.
	[  +0.006251] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.889854] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[ +16.286436] systemd-fstab-generator[2098]: Ignoring "noauto" for root device
	[  +0.258185] systemd-fstab-generator[2128]: Ignoring "noauto" for root device
	[  +0.135377] systemd-fstab-generator[2143]: Ignoring "noauto" for root device
	[  +0.180446] systemd-fstab-generator[2173]: Ignoring "noauto" for root device
	[Aug16 22:23] systemd-fstab-generator[2381]: Ignoring "noauto" for root device
	[ +20.504547] systemd-fstab-generator[2808]: Ignoring "noauto" for root device
	[ +20.717915] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.551219] kauditd_printk_skb: 104 callbacks suppressed
	[Aug16 22:24] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.792051] systemd-fstab-generator[3754]: Ignoring "noauto" for root device
	[  +0.176916] systemd-fstab-generator[3767]: Ignoring "noauto" for root device
	[  +0.230657] systemd-fstab-generator[3792]: Ignoring "noauto" for root device
	[  +4.083098] kauditd_printk_skb: 2 callbacks suppressed
	[  +3.840195] NFSD: Unable to end grace period: -110
	[  +4.324119] systemd-fstab-generator[4543]: Ignoring "noauto" for root device
	[  +6.680726] kauditd_printk_skb: 29 callbacks suppressed
	[Aug16 22:25] kauditd_printk_skb: 14 callbacks suppressed
	[ +12.641213] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.313576] systemd-fstab-generator[5666]: Ignoring "noauto" for root device
	[  +0.847782] systemd-fstab-generator[5723]: Ignoring "noauto" for root device
	[  +1.051927] systemd-fstab-generator[5775]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [124fa393359f758ea47161b345d2cab4ce486d4473a4caad483449464d44315f] <==
	* 2021-08-16 22:23:41.064197 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210816222224-6986\" " with result "range_response_count:1 size:5052" took too long (6.421187445s) to execute
	2021-08-16 22:23:41.065847 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:341" took too long (6.446897155s) to execute
	2021-08-16 22:23:41.066285 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/node-controller\" " with result "range_response_count:1 size:242" took too long (5.09674902s) to execute
	2021-08-16 22:23:41.068005 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (6.28196539s) to execute
	2021-08-16 22:23:41.068259 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (763.710719ms) to execute
	2021-08-16 22:23:41.880435 W | etcdserver: read-only range request "key:\"/registry/masterleases/192.168.50.226\" " with result "range_response_count:0 size:5" took too long (776.335267ms) to execute
	2021-08-16 22:23:41.881080 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (597.366064ms) to execute
	2021-08-16 22:23:41.882354 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:4569" took too long (763.841142ms) to execute
	2021-08-16 22:23:41.883287 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (621.677263ms) to execute
	2021-08-16 22:23:41.884722 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (481.499599ms) to execute
	2021-08-16 22:23:41.885189 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-pause-20210816222224-6986\" " with result "range_response_count:1 size:5421" took too long (772.180278ms) to execute
	2021-08-16 22:23:42.453217 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (290.061418ms) to execute
	2021-08-16 22:23:42.455427 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (285.893643ms) to execute
	2021-08-16 22:23:42.456943 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-pause-20210816222224-6986\" " with result "range_response_count:1 size:6314" took too long (153.946258ms) to execute
	2021-08-16 22:23:42.458024 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (177.825431ms) to execute
	2021-08-16 22:23:44.267832 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:23:54.092150 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (701.802797ms) to execute
	2021-08-16 22:23:54.093518 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-ctgf5\" " with result "range_response_count:1 size:4473" took too long (1.090386256s) to execute
	2021-08-16 22:23:54.267392 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:23:57.768234 W | etcdserver: request "header:<ID:4263355585347158035 > lease_revoke:<id:3b2a7b510fcb7e67>" with result "size:29" took too long (771.90226ms) to execute
	2021-08-16 22:23:57.768903 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (374.444829ms) to execute
	2021-08-16 22:23:57.769379 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-ctgf5\" " with result "range_response_count:1 size:4473" took too long (765.115046ms) to execute
	2021-08-16 22:24:04.267548 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:24:14.267958 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:24:24.268321 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> etcd [3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423] <==
	* 
	* ==> etcd [69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549] <==
	* 2021-08-16 22:24:53.773065 W | auth: simple token is not cryptographically signed
	2021-08-16 22:24:53.837118 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	raft2021/08/16 22:24:53 INFO: e840193bf29c3b2a switched to configuration voters=(16735403960572853034)
	2021-08-16 22:24:53.849298 I | etcdserver/membership: added member e840193bf29c3b2a [https://192.168.50.226:2380] to cluster 99b90e1bea73c730
	2021-08-16 22:24:53.860198 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-16 22:24:53.864997 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-16 22:24:53.865214 I | embed: listening for peers on 192.168.50.226:2380
	2021-08-16 22:24:53.868083 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-16 22:24:53.871735 I | etcdserver/api: enabled capabilities for version 3.4
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a is starting a new election at term 2
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a became candidate at term 3
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a received MsgVoteResp from e840193bf29c3b2a at term 3
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a became leader at term 3
	raft2021/08/16 22:24:54 INFO: raft.node: e840193bf29c3b2a elected leader e840193bf29c3b2a at term 3
	2021-08-16 22:24:54.968820 I | embed: ready to serve client requests
	2021-08-16 22:24:54.969394 I | etcdserver: published {Name:pause-20210816222224-6986 ClientURLs:[https://192.168.50.226:2379]} to cluster 99b90e1bea73c730
	2021-08-16 22:24:54.971284 I | embed: serving client requests on 192.168.50.226:2379
	2021-08-16 22:24:54.971462 I | embed: ready to serve client requests
	2021-08-16 22:24:54.973508 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-16 22:25:03.067902 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-gkxhz\" " with result "range_response_count:1 size:4860" took too long (140.807991ms) to execute
	2021-08-16 22:25:06.747736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:25:08.138740 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:25:10.645123 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:3838" took too long (108.124514ms) to execute
	2021-08-16 22:25:10.645989 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:665" took too long (107.967343ms) to execute
	2021-08-16 22:25:18.137756 I | etcdserver/api/etcdhttp: /health OK (status code 200)
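
The periodic "/health OK" lines are etcd's self-probe. The same status can be queried from Go with clientv3 (etcd 3.4 import path, matching the 3.4.13 server here). Note this instance enforces client-cert auth; reusing the server cert paths etcd logs at startup is an assumption, and a properly issued client pair may be required instead:

package main

import (
	"context"
	"fmt"
	"time"

	"go.etcd.io/etcd/clientv3"
	"go.etcd.io/etcd/pkg/transport"
)

func main() {
	// Cert paths as logged by etcd at startup (assumed usable as a client pair).
	tlsInfo := transport.TLSInfo{
		CertFile:      "/var/lib/minikube/certs/etcd/server.crt",
		KeyFile:       "/var/lib/minikube/certs/etcd/server.key",
		TrustedCAFile: "/var/lib/minikube/certs/etcd/ca.crt",
	}
	tlsCfg, err := tlsInfo.ClientConfig()
	if err != nil {
		panic(err)
	}
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
		TLS:         tlsCfg,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	st, err := cli.Status(ctx, "https://127.0.0.1:2379")
	if err != nil {
		panic(err)
	}
	fmt.Println("etcd version:", st.Version, "leader:", st.Leader)
}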
	
	* 
	* ==> kernel <==
	*  22:25:41 up 3 min,  0 users,  load average: 2.40, 1.48, 0.59
	Linux pause-20210816222224-6986 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [38dc61b214a9cbd019de4ca9ab52fb6baf728336de6d715df22b027522ad8b20] <==
	* I0816 22:23:41.890272       1 trace.go:205] Trace[914939944]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:23:41.117) (total time: 772ms):
	Trace[914939944]: [772.29448ms] [772.29448ms] END
	I0816 22:23:41.897880       1 trace.go:205] Trace[372773048]: "List" url:/api/v1/nodes,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/system:serviceaccount:kube-system:node-controller,client:192.168.50.226,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:41.117) (total time: 780ms):
	Trace[372773048]: ---"Listing from storage done" 773ms (22:23:00.891)
	Trace[372773048]: [780.024685ms] [780.024685ms] END
	I0816 22:23:41.899245       1 trace.go:205] Trace[189474875]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-20210816222224-6986,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.50.226,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (16-Aug-2021 22:23:41.107) (total time: 791ms):
	Trace[189474875]: ---"About to write a response" 791ms (22:23:00.899)
	Trace[189474875]: [791.769473ms] [791.769473ms] END
	I0816 22:23:41.914143       1 trace.go:205] Trace[1803257945]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (16-Aug-2021 22:23:41.101) (total time: 812ms):
	Trace[1803257945]: ---"initial value restored" 795ms (22:23:00.897)
	Trace[1803257945]: [812.099383ms] [812.099383ms] END
	I0816 22:23:46.219827       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0816 22:23:46.322056       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0816 22:23:54.101003       1 trace.go:205] Trace[1429856954]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-ctgf5,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.50.1,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:53.002) (total time: 1098ms):
	Trace[1429856954]: ---"About to write a response" 1098ms (22:23:00.100)
	Trace[1429856954]: [1.0988209s] [1.0988209s] END
	I0816 22:23:56.194218       1 client.go:360] parsed scheme: "passthrough"
	I0816 22:23:56.194943       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 22:23:56.195388       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0816 22:23:57.770900       1 trace.go:205] Trace[2103117378]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-ctgf5,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.50.1,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:57.002) (total time: 767ms):
	Trace[2103117378]: ---"About to write a response" 767ms (22:23:00.770)
	Trace[2103117378]: [767.944134ms] [767.944134ms] END
	I0816 22:24:32.818404       1 client.go:360] parsed scheme: "passthrough"
	I0816 22:24:32.818597       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 22:24:32.818691       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-apiserver [53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf] <==
	* I0816 22:24:59.052878       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0816 22:24:59.052897       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0816 22:24:59.071128       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0816 22:24:59.071704       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0816 22:24:59.072328       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0816 22:24:59.072872       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
	I0816 22:24:59.173327       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0816 22:24:59.176720       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	E0816 22:24:59.181278       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0816 22:24:59.206356       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0816 22:24:59.225165       1 cache.go:39] Caches are synced for autoregister controller
	I0816 22:24:59.227741       1 apf_controller.go:299] Running API Priority and Fairness config worker
	I0816 22:24:59.230223       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0816 22:24:59.244026       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 22:24:59.248943       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0816 22:25:00.021310       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0816 22:25:00.022052       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0816 22:25:00.034218       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0816 22:25:01.108795       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0816 22:25:01.182177       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0816 22:25:01.279321       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0816 22:25:01.344553       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 22:25:01.382891       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 22:25:11.471022       1 controller.go:611] quota admission added evaluator for: endpoints
	I0816 22:25:13.002505       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612] <==
	* 
	* ==> kube-controller-manager [825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7] <==
	* I0816 22:25:12.900492       1 shared_informer.go:247] Caches are synced for GC 
	I0816 22:25:12.900735       1 shared_informer.go:247] Caches are synced for job 
	I0816 22:25:12.908539       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0816 22:25:12.910182       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0816 22:25:12.925990       1 shared_informer.go:247] Caches are synced for stateful set 
	I0816 22:25:12.926195       1 shared_informer.go:247] Caches are synced for HPA 
	I0816 22:25:12.931999       1 shared_informer.go:247] Caches are synced for attach detach 
	I0816 22:25:12.933971       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0816 22:25:12.934151       1 shared_informer.go:247] Caches are synced for deployment 
	I0816 22:25:12.943776       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0816 22:25:12.963727       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0816 22:25:12.969209       1 shared_informer.go:247] Caches are synced for taint 
	I0816 22:25:12.969381       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	W0816 22:25:12.969524       1 node_lifecycle_controller.go:1013] Missing timestamp for Node pause-20210816222224-6986. Assuming now as a timestamp.
	I0816 22:25:12.969564       1 node_lifecycle_controller.go:1214] Controller detected that zone  is now in state Normal.
	I0816 22:25:12.970457       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0816 22:25:12.970831       1 event.go:291] "Event occurred" object="pause-20210816222224-6986" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20210816222224-6986 event: Registered Node pause-20210816222224-6986 in Controller"
	I0816 22:25:12.974749       1 shared_informer.go:247] Caches are synced for endpoint 
	I0816 22:25:13.000548       1 shared_informer.go:247] Caches are synced for disruption 
	I0816 22:25:13.000739       1 disruption.go:371] Sending events to api server.
	I0816 22:25:13.004608       1 shared_informer.go:247] Caches are synced for resource quota 
	I0816 22:25:13.016848       1 shared_informer.go:247] Caches are synced for resource quota 
	I0816 22:25:13.386564       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0816 22:25:13.386597       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0816 22:25:13.440139       1 shared_informer.go:247] Caches are synced for garbage collector 
	
	* 
	* ==> kube-controller-manager [8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4] <==
	* 
	* ==> kube-proxy [2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2] <==
	* I0816 22:25:00.641886       1 node.go:172] Successfully retrieved node IP: 192.168.50.226
	I0816 22:25:00.641938       1 server_others.go:140] Detected node IP 192.168.50.226
	W0816 22:25:00.642012       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0816 22:25:00.805515       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0816 22:25:00.805539       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0816 22:25:00.805560       1 server_others.go:212] Using iptables Proxier.
	I0816 22:25:00.806059       1 server.go:643] Version: v1.21.3
	I0816 22:25:00.807251       1 config.go:315] Starting service config controller
	I0816 22:25:00.807281       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:25:00.807307       1 config.go:224] Starting endpoint slice config controller
	I0816 22:25:00.807313       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:25:00.812511       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 22:25:00.816722       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:25:00.907844       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 22:25:00.907906       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d] <==
	* 
	* ==> kube-proxy [a8503bd796d5d979a6e1b8b5154986e8b77de391b4f091211451ea5f52808e52] <==
	* I0816 22:23:49.316430       1 node.go:172] Successfully retrieved node IP: 192.168.50.226
	I0816 22:23:49.316608       1 server_others.go:140] Detected node IP 192.168.50.226
	W0816 22:23:49.316822       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0816 22:23:49.402698       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0816 22:23:49.403462       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0816 22:23:49.404047       1 server_others.go:212] Using iptables Proxier.
	I0816 22:23:49.407950       1 server.go:643] Version: v1.21.3
	I0816 22:23:49.410864       1 config.go:315] Starting service config controller
	I0816 22:23:49.413112       1 config.go:224] Starting endpoint slice config controller
	I0816 22:23:49.419474       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:23:49.421254       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:23:49.413718       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:23:49.425958       1 shared_informer.go:247] Caches are synced for service config 
	W0816 22:23:49.425586       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:23:49.520425       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1] <==
	* I0816 22:24:54.634243       1 serving.go:347] Generated self-signed cert in-memory
	W0816 22:24:59.095457       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 22:24:59.098028       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 22:24:59.098491       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 22:24:59.098734       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 22:24:59.166481       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0816 22:24:59.178395       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:24:59.177851       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0816 22:24:59.194249       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:24:59.304036       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [8710cefecdbe5d31cd44e9ae3378bc08cbc56001326a1cb38026755196cac7d1] <==
	* E0816 22:23:21.172468       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:23:21.189536       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:23:21.300836       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.329219       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.448607       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:23:21.504104       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.504531       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:23:21.597849       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:23:21.612843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:23:21.671333       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:23:21.827198       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:23:21.852843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:23:21.867015       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.910139       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:23:23.291774       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.356078       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.452841       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:23:23.464942       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:23:23.644764       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:23:23.649142       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.710606       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.980099       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:23:24.052112       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:23:24.168543       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0816 22:23:30.043826       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a] <==
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:22:35 UTC, end at Mon 2021-08-16 22:25:42 UTC. --
	Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.919357    4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:59.020392    4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.121233    4551 kuberuntime_manager.go:1044] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.122462    4551 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.228577    4551 kubelet_node_status.go:109] "Node was previously registered" node="pause-20210816222224-6986"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.228853    4551 kubelet_node_status.go:74] "Successfully registered node" node="pause-20210816222224-6986"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.536346    4551 apiserver.go:52] "Watching apiserver"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.540959    4551 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.541581    4551 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.609734    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c0e0899-31c1-477a-a6d4-2844091deea2-kube-proxy\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.610130    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c0e0899-31c1-477a-a6d4-2844091deea2-xtables-lock\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.610271    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c0e0899-31c1-477a-a6d4-2844091deea2-lib-modules\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.610503    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2grh\" (UniqueName: \"kubernetes.io/projected/3c0e0899-31c1-477a-a6d4-2844091deea2-kube-api-access-b2grh\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.711424    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgpd2\" (UniqueName: \"kubernetes.io/projected/5aa76749-775e-423d-bbf9-680a20a27051-kube-api-access-rgpd2\") pod \"coredns-558bd4d5db-gkxhz\" (UID: \"5aa76749-775e-423d-bbf9-680a20a27051\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.712578    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5aa76749-775e-423d-bbf9-680a20a27051-config-volume\") pod \"coredns-558bd4d5db-gkxhz\" (UID: \"5aa76749-775e-423d-bbf9-680a20a27051\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.713123    4551 reconciler.go:157] "Reconciler: start to sync state"
	Aug 16 22:25:00 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:00.142816    4551 scope.go:111] "RemoveContainer" containerID="9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d"
	Aug 16 22:25:03 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:03.115940    4551 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Aug 16 22:25:11 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:11.548694    4551 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:25:11 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:11.620746    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4f138dc7-da0e-4775-b4de-b0f7d616b212-tmp\") pod \"storage-provisioner\" (UID: \"4f138dc7-da0e-4775-b4de-b0f7d616b212\") "
	Aug 16 22:25:11 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:11.621027    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7pzn\" (UniqueName: \"kubernetes.io/projected/4f138dc7-da0e-4775-b4de-b0f7d616b212-kube-api-access-n7pzn\") pod \"storage-provisioner\" (UID: \"4f138dc7-da0e-4775-b4de-b0f7d616b212\") "
	Aug 16 22:25:19 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:19.625547    4551 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 16 22:25:19 pause-20210816222224-6986 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:25:19 pause-20210816222224-6986 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:25:19 pause-20210816222224-6986 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 80 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc00032b490, 0x0)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc00032b480)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0003e1260, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0003bd400, 0x18e5530, 0xc00032a600, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0001b7d20)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0001b7d20, 0x18b3d60, 0xc0001bb8c0, 0xc00038ff01, 0xc000136600)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0001b7d20, 0x3b9aca00, 0x0, 0x48ef01, 0xc000136600)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0001b7d20, 0x3b9aca00, 0xc000136600)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

-- /stdout --
** stderr ** 
	E0816 22:25:39.518521   11601 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	E0816 22:25:41.742273   11601 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:25:41Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log\": lstat /var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:25:41Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log\\\": lstat /var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:25:42.072535   11601 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:25:42Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log\": lstat /var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:25:42Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log\\\": lstat /var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:25:42.225369   11601 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:25:42Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log\": lstat /var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:25:42Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log\\\": lstat /var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:25:42.370081   11601 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:25:42Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log\": lstat /var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:25:42Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log\\\": lstat /var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:25:42.684572   11601 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:25:42Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log\": lstat /var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:25:42Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log\\\": lstat /var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log: no such file or directory\"\n\n** /stderr **"
	! unable to fetch logs for: describe nodes, etcd [3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423], kube-apiserver [7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612], kube-controller-manager [8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4], kube-proxy [9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d], kube-scheduler [97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a]

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816222224-6986 -n pause-20210816222224-6986
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816222224-6986 -n pause-20210816222224-6986: exit status 2 (368.866003ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210816222224-6986 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210816222224-6986 logs -n 25: exit status 110 (12.523569179s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                  |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | multinode-20210816215441-6986          | multinode-20210816215441-6986          | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:08:07 UTC | Mon, 16 Aug 2021 22:11:11 UTC |
	|         | stop                                   |                                        |         |         |                               |                               |
	| start   | -p                                     | multinode-20210816215441-6986          | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:11:11 UTC | Mon, 16 Aug 2021 22:15:19 UTC |
	|         | multinode-20210816215441-6986          |                                        |         |         |                               |                               |
	|         | --wait=true -v=8                       |                                        |         |         |                               |                               |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | --driver=kvm2                          |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| start   | -p                                     | multinode-20210816215441-6986-m03      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:19 UTC | Mon, 16 Aug 2021 22:16:20 UTC |
	|         | multinode-20210816215441-6986-m03      |                                        |         |         |                               |                               |
	|         | --driver=kvm2                          |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| delete  | -p                                     | multinode-20210816215441-6986-m03      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:16:20 UTC | Mon, 16 Aug 2021 22:16:21 UTC |
	|         | multinode-20210816215441-6986-m03      |                                        |         |         |                               |                               |
	| delete  | -p                                     | multinode-20210816215441-6986          | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:16:21 UTC | Mon, 16 Aug 2021 22:16:23 UTC |
	|         | multinode-20210816215441-6986          |                                        |         |         |                               |                               |
	| start   | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:18:07 UTC | Mon, 16 Aug 2021 22:19:45 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | --wait=true --preload=false            |                                        |         |         |                               |                               |
	|         | --driver=kvm2                          |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0           |                                        |         |         |                               |                               |
	| ssh     | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:45 UTC | Mon, 16 Aug 2021 22:19:47 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | -- sudo crictl pull busybox            |                                        |         |         |                               |                               |
	| start   | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:48 UTC | Mon, 16 Aug 2021 22:20:39 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=kvm2         |                                        |         |         |                               |                               |
	|         |  --container-runtime=containerd        |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3           |                                        |         |         |                               |                               |
	| ssh     | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:39 UTC | Mon, 16 Aug 2021 22:20:39 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | -- sudo crictl image ls                |                                        |         |         |                               |                               |
	| delete  | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:39 UTC | Mon, 16 Aug 2021 22:20:40 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	| start   | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:40 UTC | Mon, 16 Aug 2021 22:21:45 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	|         | --memory=2048 --driver=kvm2            |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| stop    | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:46 UTC | Mon, 16 Aug 2021 22:21:46 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	|         | --cancel-scheduled                     |                                        |         |         |                               |                               |
	| stop    | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:58 UTC | Mon, 16 Aug 2021 22:22:05 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	|         | --schedule 5s                          |                                        |         |         |                               |                               |
	| delete  | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:23 UTC | Mon, 16 Aug 2021 22:22:24 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	| delete  | -p kubenet-20210816222224-6986         | kubenet-20210816222224-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:22:25 UTC |
	| delete  | -p false-20210816222225-6986           | false-20210816222225-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:25 UTC | Mon, 16 Aug 2021 22:22:25 UTC |
	| start   | -p                                     | force-systemd-env-20210816222224-6986  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:24:04 UTC |
	|         | force-systemd-env-20210816222224-6986  |                                        |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | -v=5 --driver=kvm2                     |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| -p      | force-systemd-env-20210816222224-6986  | force-systemd-env-20210816222224-6986  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:04 UTC | Mon, 16 Aug 2021 22:24:04 UTC |
	|         | ssh cat /etc/containerd/config.toml    |                                        |         |         |                               |                               |
	| delete  | -p                                     | force-systemd-env-20210816222224-6986  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:04 UTC | Mon, 16 Aug 2021 22:24:05 UTC |
	|         | force-systemd-env-20210816222224-6986  |                                        |         |         |                               |                               |
	| start   | -p pause-20210816222224-6986           | pause-20210816222224-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:24:28 UTC |
	|         | --memory=2048                          |                                        |         |         |                               |                               |
	|         | --install-addons=false                 |                                        |         |         |                               |                               |
	|         | --wait=all --driver=kvm2               |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| start   | -p                                     | kubernetes-upgrade-20210816222225-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:25 UTC | Mon, 16 Aug 2021 22:24:48 UTC |
	|         | kubernetes-upgrade-20210816222225-6986 |                                        |         |         |                               |                               |
	|         | --memory=2200                          |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0           |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=kvm2   |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| stop    | -p                                     | kubernetes-upgrade-20210816222225-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:49 UTC | Mon, 16 Aug 2021 22:24:53 UTC |
	|         | kubernetes-upgrade-20210816222225-6986 |                                        |         |         |                               |                               |
	| start   | -p                                     | offline-containerd-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:25:02 UTC |
	|         | offline-containerd-20210816222224-6986 |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048   |                                        |         |         |                               |                               |
	|         | --wait=true --driver=kvm2              |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| delete  | -p                                     | offline-containerd-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:02 UTC | Mon, 16 Aug 2021 22:25:03 UTC |
	|         | offline-containerd-20210816222224-6986 |                                        |         |         |                               |                               |
	| start   | -p pause-20210816222224-6986           | pause-20210816222224-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:28 UTC | Mon, 16 Aug 2021 22:25:13 UTC |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                     |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:25:29
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:25:29.865623   11635 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:25:29.865693   11635 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:25:29.865697   11635 out.go:311] Setting ErrFile to fd 2...
	I0816 22:25:29.865700   11635 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:25:29.865823   11635 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:25:29.866117   11635 out.go:305] Setting JSON to false
	I0816 22:25:29.903940   11635 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":4092,"bootTime":1629148638,"procs":190,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:25:29.904062   11635 start.go:121] virtualization: kvm guest
	I0816 22:25:29.906708   11635 out.go:177] * [stopped-upgrade-20210816222405-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:25:29.908312   11635 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:29.906879   11635 notify.go:169] Checking for updates...
	I0816 22:25:29.909776   11635 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:25:29.911276   11635 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:25:29.811958   10879 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:25:29.817441   10879 start.go:413] Will wait 60s for crictl version
	I0816 22:25:29.817496   10879 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:25:29.853053   10879 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0816 22:25:29.853115   10879 ssh_runner.go:149] Run: containerd --version
	I0816 22:25:29.886777   10879 ssh_runner.go:149] Run: containerd --version
	I0816 22:25:29.912690   11635 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:25:29.913065   11635 config.go:177] Loaded profile config "stopped-upgrade-20210816222405-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0816 22:25:29.913582   11635 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:29.913647   11635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:29.927429   11635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45201
	I0816 22:25:29.927806   11635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:29.928407   11635 main.go:130] libmachine: Using API Version  1
	I0816 22:25:29.928426   11635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:29.928828   11635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:29.928990   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:29.930681   11635 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0816 22:25:29.930718   11635 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:25:29.931039   11635 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:29.931072   11635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:29.942479   11635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40149
	I0816 22:25:29.942868   11635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:29.943307   11635 main.go:130] libmachine: Using API Version  1
	I0816 22:25:29.943323   11635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:29.943763   11635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:29.943943   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:29.977758   11635 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 22:25:29.977793   11635 start.go:278] selected driver: kvm2
	I0816 22:25:29.977800   11635 start.go:751] validating driver "kvm2" against &{Name:stopped-upgrade-20210816222405-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.16.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:stopped-upgrade-20210816222405-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.139 Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:29.977911   11635 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 22:25:29.979236   11635 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:29.979388   11635 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 22:25:29.992607   11635 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0816 22:25:29.992893   11635 cni.go:93] Creating CNI manager for ""
	I0816 22:25:29.992910   11635 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:25:29.992918   11635 start_flags.go:277] config:
	{Name:stopped-upgrade-20210816222405-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.16.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:stopped-upgrade-20210816222405-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.139 Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:29.993026   11635 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:29.994994   11635 out.go:177] * Starting control plane node stopped-upgrade-20210816222405-6986 in cluster stopped-upgrade-20210816222405-6986
	I0816 22:25:29.995015   11635 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	W0816 22:25:30.055078   11635 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.20.0-containerd-overlay2-amd64.tar.lz4 status code: 404
	I0816 22:25:30.055326   11635 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/stopped-upgrade-20210816222405-6986/config.json ...
	I0816 22:25:30.055447   11635 cache.go:108] acquiring lock: {Name:mk44a899d3e13d1e1a41236ca93bfa4c540d90ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055469   11635 cache.go:108] acquiring lock: {Name:mk54690e8b8165106a936f57493f4a5f28a2f038 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055479   11635 cache.go:108] acquiring lock: {Name:mk5fa6434b4b67f17fb247c2a1febaaba95afc21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055522   11635 cache.go:108] acquiring lock: {Name:mk248d78835e4d0dd7deebdde93e709059900376 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055596   11635 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:25:30.055621   11635 start.go:313] acquiring machines lock for stopped-upgrade-20210816222405-6986: {Name:mk808edd60d1305a42bb85791729eff4573dbb15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 22:25:30.055639   11635 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.20.0
	I0816 22:25:30.055653   11635 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.20.0
	I0816 22:25:30.055661   11635 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.20.0
	I0816 22:25:30.055667   11635 cache.go:108] acquiring lock: {Name:mk74554b1ad079cff2e1d01801c80cf158c3d0db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055449   11635 cache.go:108] acquiring lock: {Name:mk21e396b81c69d7c5a1e31157ecfaad7d142ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055784   11635 image.go:133] retrieving image: k8s.gcr.io/coredns:1.7.0
	I0816 22:25:30.055784   11635 cache.go:108] acquiring lock: {Name:mk4f799cc9e7aed39d4f75ef9ab783b3653bcaec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055644   11635 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0816 22:25:30.055857   11635 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.20.0
	I0816 22:25:30.055838   11635 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 404.352µs
	I0816 22:25:30.055869   11635 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0816 22:25:30.055492   11635 cache.go:108] acquiring lock: {Name:mk35426dc160f9577622987ee511aee9c6194a42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055863   11635 cache.go:108] acquiring lock: {Name:mk7034d6c48699d6f6387acd67a2f9aff6580cde Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055899   11635 cache.go:108] acquiring lock: {Name:mk83079d57b39c409d978da9d87f61040fa2879d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055991   11635 image.go:133] retrieving image: k8s.gcr.io/pause:3.2
	I0816 22:25:30.055993   11635 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0816 22:25:30.056005   11635 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0816 22:25:30.056013   11635 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 117.345µs
	I0816 22:25:30.056023   11635 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 216.257µs
	I0816 22:25:30.056038   11635 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0816 22:25:30.056041   11635 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0816 22:25:30.056058   11635 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.13-0
	I0816 22:25:30.069442   11635 image.go:171] found k8s.gcr.io/pause:3.2 locally: &{UncompressedImageCore:0xc0015120b8 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:30.069470   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2
	I0816 22:25:30.125132   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
	I0816 22:25:30.125180   11635 cache.go:97] cache image "k8s.gcr.io/pause:3.2" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 69.39782ms
	I0816 22:25:30.125195   11635 cache.go:81] save to tar file k8s.gcr.io/pause:3.2 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
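
Note: the cache lines above follow a fixed pattern: acquire a named lock per image, short-circuit if the cached tarball already exists (the sub-millisecond "took" timings), otherwise download the image and save it as a tar. A minimal Go sketch of that pattern follows; the names and the stubbed downloadImage helper are illustrative, not minikube's actual internals.

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
        "sync"
        "time"
    )

    // locks holds one mutex per destination path, standing in for the
    // named file locks in the "cache.go:108 acquiring lock" lines above.
    var locks sync.Map

    func cacheImage(image, cacheDir string) error {
        // k8s.gcr.io/pause:3.2 -> <cacheDir>/k8s.gcr.io/pause_3.2
        dest := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
        mu, _ := locks.LoadOrStore(dest, &sync.Mutex{})
        mu.(*sync.Mutex).Lock()
        defer mu.(*sync.Mutex).Unlock()

        start := time.Now()
        if _, err := os.Stat(dest); err == nil {
            // tarball already cached: the fast "exists" case above
            fmt.Printf("cache image %q -> %q took %s\n", image, dest, time.Since(start))
            return nil
        }
        if err := os.MkdirAll(filepath.Dir(dest), 0o755); err != nil {
            return err
        }
        data, err := downloadImage(image) // hypothetical fetch-and-pack step
        if err != nil {
            return err
        }
        return os.WriteFile(dest, data, 0o644)
    }

    // downloadImage is a stub for pulling the image and packing it as a tar.
    func downloadImage(image string) ([]byte, error) { return []byte(image), nil }

    func main() {
        if err := cacheImage("k8s.gcr.io/pause:3.2", "/tmp/cache/images"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
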
	I0816 22:25:30.603136   11635 image.go:171] found k8s.gcr.io/coredns:1.7.0 locally: &{UncompressedImageCore:0xc0001b80d0 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:30.603171   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0
	I0816 22:25:30.944848   11635 start.go:317] acquired machines lock for "stopped-upgrade-20210816222405-6986" in 889.202815ms
	I0816 22:25:30.944895   11635 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:25:30.944913   11635 fix.go:55] fixHost starting: 
	I0816 22:25:30.945363   11635 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:30.945411   11635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:30.958997   11635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43851
	I0816 22:25:30.959558   11635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:30.960174   11635 main.go:130] libmachine: Using API Version  1
	I0816 22:25:30.960191   11635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:30.960618   11635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:30.961025   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:30.961188   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetState
	I0816 22:25:30.964772   11635 fix.go:108] recreateIfNeeded on stopped-upgrade-20210816222405-6986: state=Stopped err=<nil>
	I0816 22:25:30.964804   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	W0816 22:25:30.964947   11635 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:25:29.920921   10879 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on containerd 1.4.9 ...
	I0816 22:25:29.920959   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetIP
	I0816 22:25:29.926612   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:29.927031   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:29.927059   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:29.927194   10879 ssh_runner.go:149] Run: grep 192.168.116.1	host.minikube.internal$ /etc/hosts
	I0816 22:25:29.931935   10879 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.116.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
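
Note: the /etc/hosts update above is a filter-and-append rewrite: strip any existing line for the host, append a fresh "IP<tab>host" entry, and copy the result back into place. A rough Go equivalent of the same idea, with an illustrative stand-in path:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry mirrors the shell one-liner above: drop any line
    // ending in "\t<host>", append a fresh "<ip>\t<host>", write it back.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // stale entry: filtered out like grep -v
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        // /tmp/hosts is a stand-in; the real flow rewrites /etc/hosts via sudo cp
        if err := ensureHostsEntry("/tmp/hosts", "192.168.116.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
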
	I0816 22:25:29.945360   10879 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 22:25:29.945419   10879 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:29.980071   10879 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:25:29.980092   10879 containerd.go:517] Images already preloaded, skipping extraction
	I0816 22:25:29.980136   10879 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:30.014510   10879 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:25:30.014534   10879 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:25:30.014596   10879 ssh_runner.go:149] Run: sudo crictl info
	I0816 22:25:30.049631   10879 cni.go:93] Creating CNI manager for ""
	I0816 22:25:30.049653   10879 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:25:30.049662   10879 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:25:30.049673   10879 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.116.91 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20210816222225-6986 NodeName:kubernetes-upgrade-20210816222225-6986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.116.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.116.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:25:30.049804   10879 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.116.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "kubernetes-upgrade-20210816222225-6986"
	  kubeletExtraArgs:
	    node-ip: 192.168.116.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.116.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
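
Note: the generated file above is four YAML documents in one: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small Go sketch that walks such a multi-document file with gopkg.in/yaml.v3 (assumed available) and prints each document's apiVersion and kind:

    package main

    import (
        "bytes"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path from the log above
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        dec := yaml.NewDecoder(bytes.NewReader(raw))
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            err := dec.Decode(&doc)
            if err == io.EOF {
                break // no more documents in the stream
            }
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                return
            }
            fmt.Printf("%-35s %s\n", doc.APIVersion, doc.Kind)
        }
    }
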
	
	I0816 22:25:30.049880   10879 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-20210816222225-6986 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.116.91 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210816222225-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
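
Note the empty ExecStart= preceding the real one in the drop-in above: in a systemd drop-in, an empty assignment clears the base unit's command list, so the following ExecStart replaces rather than appends. A hedged Go sketch that writes a drop-in using this idiom; the path and the trimmed flag list are illustrative:

    package main

    import (
        "fmt"
        "os"
    )

    // Without the empty ExecStart=, systemd would treat the second
    // ExecStart as an additional command and reject the unit for a
    // non-oneshot service. Flags trimmed for brevity.
    const dropIn = `[Unit]
    Wants=containerd.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf
    `

    func main() {
        // stand-in path for /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
        if err := os.WriteFile("/tmp/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
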
	I0816 22:25:30.049928   10879 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0816 22:25:30.057977   10879 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:25:30.058039   10879 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:25:30.069033   10879 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (559 bytes)
	I0816 22:25:30.084505   10879 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0816 22:25:30.099259   10879 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0816 22:25:30.117789   10879 ssh_runner.go:149] Run: grep 192.168.116.91	control-plane.minikube.internal$ /etc/hosts
	I0816 22:25:30.123120   10879 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.116.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:25:30.137790   10879 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986 for IP: 192.168.116.91
	I0816 22:25:30.137839   10879 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:25:30.137860   10879 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:25:30.137924   10879 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/client.key
	I0816 22:25:30.137959   10879 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/apiserver.key.0bcaee26
	I0816 22:25:30.137982   10879 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/proxy-client.key
	I0816 22:25:30.138107   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem (1338 bytes)
	W0816 22:25:30.138164   10879 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
	I0816 22:25:30.138180   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 22:25:30.138217   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:25:30.138260   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:25:30.138286   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1675 bytes)
	I0816 22:25:30.138335   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:25:30.139672   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:25:30.162501   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 22:25:30.187988   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:25:30.208990   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 22:25:30.232139   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:25:30.260638   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 22:25:30.287352   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:25:30.316722   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 22:25:30.346450   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
	I0816 22:25:30.369471   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
	I0816 22:25:30.397914   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:25:30.433581   10879 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:25:30.454657   10879 ssh_runner.go:149] Run: openssl version
	I0816 22:25:30.463107   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
	I0816 22:25:30.475906   10879 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6986.pem
	I0816 22:25:30.482988   10879 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:49 /usr/share/ca-certificates/6986.pem
	I0816 22:25:30.483059   10879 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
	I0816 22:25:30.492438   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
	I0816 22:25:30.507780   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
	I0816 22:25:30.522931   10879 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/69862.pem
	I0816 22:25:30.529692   10879 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:49 /usr/share/ca-certificates/69862.pem
	I0816 22:25:30.529753   10879 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
	I0816 22:25:30.540308   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:25:30.555395   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:25:30.571425   10879 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:30.577689   10879 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:42 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:30.577743   10879 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:30.588310   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
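
Note: the 51391683.0, 3ec20f2e.0, and b5213941.0 symlink names above are OpenSSL subject hashes: `openssl x509 -hash -noout` prints the hash, and linking <hash>.0 to the PEM in /etc/ssl/certs makes the certificate discoverable by TLS lookups. A small Go sketch of that c_rehash-style step, shelling out to openssl the same way the log does (paths illustrative):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // rehashCert asks openssl for the subject hash of a PEM, then points
    // <certsDir>/<hash>.0 at it, mirroring the ln -fs commands above.
    func rehashCert(pem, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("%s/%s.0", certsDir, hash)
        os.Remove(link) // error ignored: the link may not exist yet
        return os.Symlink(pem, link)
    }

    func main() {
        if err := rehashCert("/usr/share/ca-certificates/minikubeCA.pem", "/tmp/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
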
	I0816 22:25:30.599536   10879 kubeadm.go:390] StartCluster: {Name:kubernetes-upgrade-20210816222225-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210816222225-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.116.91 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:30.599646   10879 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0816 22:25:30.599708   10879 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:25:30.645578   10879 cri.go:76] found id: ""
	I0816 22:25:30.645662   10879 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:25:30.656753   10879 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:25:30.656775   10879 kubeadm.go:600] restartCluster start
	I0816 22:25:30.656823   10879 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:25:30.665356   10879 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:30.666456   10879 kubeconfig.go:117] verify returned: extract IP: "kubernetes-upgrade-20210816222225-6986" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:30.666789   10879 kubeconfig.go:128] "kubernetes-upgrade-20210816222225-6986" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:25:30.667454   10879 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:25:30.668301   10879 kapi.go:59] client config for kubernetes-upgrade-20210816222225-6986: &rest.Config{Host:"https://192.168.116.91:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 22:25:30.670154   10879 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:25:30.680113   10879 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta2
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.116.91
	@@ -17,7 +17,7 @@
	     node-ip: 192.168.116.91
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta2
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.116.91"]
	@@ -31,7 +31,7 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-20210816222225-6986
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 dns:
	   type: CoreDNS
	@@ -39,8 +39,8 @@
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.116.91:2381
	-kubernetesVersion: v1.14.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.22.0-rc.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
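
Note: the "needs reconfigure: configs differ" decision above keys off diff's exit status: `diff -u` exits 0 when the rendered kubeadm.yaml matches the new one and 1 when they differ. A rough Go sketch of the same check (not minikube's exact implementation):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // needsReconfigure mirrors the check above: diff -u exits 0 when the
    // files match, 1 when they differ, and >1 on error.
    func needsReconfigure(current, proposed string) (bool, string, error) {
        out, err := exec.Command("sudo", "diff", "-u", current, proposed).CombinedOutput()
        if err == nil {
            return false, "", nil // identical: no reconfiguration required
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return true, string(out), nil // differ: reconfigure from the .new file
        }
        return false, "", err
    }

    func main() {
        differs, diff, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(differs, err)
        if differs {
            fmt.Print(diff)
        }
    }
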
	I0816 22:25:30.680129   10879 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:25:30.680144   10879 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:25:30.680191   10879 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:25:30.726916   10879 cri.go:76] found id: ""
	I0816 22:25:30.726997   10879 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:25:30.746591   10879 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:25:30.758779   10879 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:25:30.758838   10879 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:25:30.769229   10879 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:25:30.769260   10879 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:30.999868   10879 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:32.699195   10879 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.699296673s)
	I0816 22:25:32.699238   10879 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:33.156614   10879 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:33.351483   10879 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:33.492135   10879 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:25:33.492214   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:34.016071   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:30.969443   11635 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-20210816222405-6986" ...
	I0816 22:25:30.969474   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .Start
	I0816 22:25:30.969661   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Ensuring networks are active...
	I0816 22:25:30.972266   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Ensuring network default is active
	I0816 22:25:30.972626   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Ensuring network minikube-net is active
	I0816 22:25:30.973378   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Getting domain xml...
	I0816 22:25:30.975969   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Creating domain...
	I0816 22:25:31.337824   11635 image.go:171] found k8s.gcr.io/kube-scheduler:v1.20.0 locally: &{UncompressedImageCore:0xc0005f0060 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:31.337868   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0
	I0816 22:25:31.505137   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Waiting to get IP...
	I0816 22:25:31.505762   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:31.507077   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has current primary IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:31.507133   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Found IP for machine: 192.168.94.139
	I0816 22:25:31.507148   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Reserving static IP address...
	I0816 22:25:31.507652   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "stopped-upgrade-20210816222405-6986", mac: "52:54:00:48:08:ec", ip: "192.168.94.139"} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:24:28 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:31.507684   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-20210816222405-6986", mac: "52:54:00:48:08:ec", ip: "192.168.94.139"}
	I0816 22:25:31.507704   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | Getting to WaitForSSH function...
	I0816 22:25:31.507722   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Reserved static IP address: 192.168.94.139
	I0816 22:25:31.507732   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Waiting for SSH to be available...
	I0816 22:25:31.513971   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:31.514423   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:24:28 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:31.514456   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:31.514613   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | Using SSH client type: external
	I0816 22:25:31.514740   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816222405-6986/id_rsa (-rw-------)
	I0816 22:25:31.514792   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.94.139 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816222405-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:25:31.514855   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | About to run SSH command:
	I0816 22:25:31.514871   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | exit 0
	I0816 22:25:32.653039   11635 image.go:171] found k8s.gcr.io/kube-controller-manager:v1.20.0 locally: &{UncompressedImageCore:0xc000114030 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:32.653105   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0
	I0816 22:25:32.791720   11635 image.go:171] found k8s.gcr.io/kube-apiserver:v1.20.0 locally: &{UncompressedImageCore:0xc001512008 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:32.791795   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0
	I0816 22:25:34.107008   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 exists
	I0816 22:25:34.107061   11635 cache.go:97] cache image "k8s.gcr.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0" took 4.051394985s
	I0816 22:25:34.107085   11635 cache.go:81] save to tar file k8s.gcr.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 succeeded
	I0816 22:25:34.315350   11635 image.go:171] found k8s.gcr.io/etcd:3.4.13-0 locally: &{UncompressedImageCore:0xc000114050 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:34.315401   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0
	I0816 22:25:34.340626   11635 image.go:171] found k8s.gcr.io/kube-proxy:v1.20.0 locally: &{UncompressedImageCore:0xc000010208 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:34.340673   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0
	I0816 22:25:34.809104   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 exists
	I0816 22:25:34.809158   11635 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0" took 4.753722894s
	I0816 22:25:34.809180   11635 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 succeeded
	I0816 22:25:34.516083   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:35.015857   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:35.519104   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:36.016162   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:36.515658   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:37.015674   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:37.515739   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:38.015673   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:38.515650   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:39.015668   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
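
Note: the repeated pgrep lines above are a poll loop: run `pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms until the apiserver process appears or a deadline passes. A minimal Go sketch of that wait; the timeout value is illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep on the ~500ms cadence visible in the
    // log until the kube-apiserver process appears or the deadline passes.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil // pgrep exited 0: process found
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServer(2 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }
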
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	f04c445038901       6e38f40d628db       31 seconds ago       Exited              storage-provisioner       0                   0e10d9862204b
	e70dd80568a0a       296a6d5035e2d       42 seconds ago       Running             coredns                   1                   c649190b7c07d
	2585772c8a261       adb2816ea823a       44 seconds ago       Running             kube-proxy                2                   d73b4cafe25f0
	53780b2759956       3d174f00aa39e       51 seconds ago       Running             kube-apiserver            2                   fb9f201b2c2e1
	76fef890edebe       6be0dc1302e30       51 seconds ago       Running             kube-scheduler            2                   1718d2a0276ce
	69a7fab4848c4       0369cf4303ffd       51 seconds ago       Running             etcd                      2                   3b9459ff3a0d8
	825e79d62718c       bc2bb319a7038       51 seconds ago       Running             kube-controller-manager   2                   feab707eb735a
	7626b842ef886       3d174f00aa39e       51 seconds ago       Created             kube-apiserver            1                   fb9f201b2c2e1
	9d9f34b35e099       adb2816ea823a       51 seconds ago       Created             kube-proxy                1                   d73b4cafe25f0
	97c4cc3614116       6be0dc1302e30       51 seconds ago       Created             kube-scheduler            1                   1718d2a0276ce
	3644e35e40a2f       0369cf4303ffd       51 seconds ago       Created             etcd                      1                   3b9459ff3a0d8
	8c5f2c007cff4       bc2bb319a7038       55 seconds ago       Created             kube-controller-manager   1                   feab707eb735a
	28c7161cd49a4       296a6d5035e2d       About a minute ago   Exited              coredns                   0                   05c2427240818
	a8503bd796d5d       adb2816ea823a       About a minute ago   Exited              kube-proxy                0                   a86c3b6ee3a70
	124fa393359f7       0369cf4303ffd       2 minutes ago        Exited              etcd                      0                   94a493a65b593
	8710cefecdbe5       6be0dc1302e30       2 minutes ago        Exited              kube-scheduler            0                   982e66890a90d
	38dc61b214a9c       3d174f00aa39e       2 minutes ago        Exited              kube-apiserver            0                   630ed9d4644e9
	
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2021-08-16 22:22:35 UTC, end at Mon 2021-08-16 22:25:44 UTC. --
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.233198734Z" level=info msg="StartContainer for \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.443953465Z" level=info msg="StopPodSandbox for \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.444769081Z" level=info msg="Container to stop \"28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.461877463Z" level=info msg="StartContainer for \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\" returns successfully"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536079353Z" level=info msg="TearDown network for sandbox \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\" successfully"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536191167Z" level=info msg="StopPodSandbox for \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\" returns successfully"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536962082Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-gkxhz,Uid:5aa76749-775e-423d-bbf9-680a20a27051,Namespace:kube-system,Attempt:1,}"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.776744568Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb pid=5007
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.290447333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-gkxhz,Uid:5aa76749-775e-423d-bbf9-680a20a27051,Namespace:kube-system,Attempt:1,} returns sandbox id \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\""
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.300600113Z" level=info msg="CreateContainer within sandbox \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.389478760Z" level=info msg="CreateContainer within sandbox \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\""
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.397162604Z" level=info msg="StartContainer for \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\""
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.594046909Z" level=info msg="StartContainer for \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\" returns successfully"
	Aug 16 22:25:11 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:11.852957632Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:4f138dc7-da0e-4775-b4de-b0f7d616b212,Namespace:kube-system,Attempt:0,}"
	Aug 16 22:25:11 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:11.903771908Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c pid=5174
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.439549893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:4f138dc7-da0e-4775-b4de-b0f7d616b212,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\""
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.451930506Z" level=info msg="CreateContainer within sandbox \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.521875733Z" level=info msg="CreateContainer within sandbox \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.523292924Z" level=info msg="StartContainer for \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.851898064Z" level=info msg="StartContainer for \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\" returns successfully"
	Aug 16 22:25:32 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:32.698376142Z" level=info msg="Finish piping stderr of container \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
	Aug 16 22:25:32 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:32.700549219Z" level=info msg="Finish piping stdout of container \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
	Aug 16 22:25:32 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:32.702928257Z" level=info msg="TaskExit event &TaskExit{ContainerID:f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a,ID:f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a,Pid:5251,ExitStatus:255,ExitedAt:2021-08-16 22:25:32.702245647 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:25:32 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:32.834950240Z" level=info msg="shim disconnected" id=f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a
	Aug 16 22:25:32 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:32.835568670Z" level=error msg="copy shim log" error="read /proc/self/fd/118: file already closed"
	
	* 
	* ==> coredns [28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	I0816 22:24:19.170128       1 trace.go:205] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.168) (total time: 30001ms):
	Trace[2019727887]: [30.001909435s] [30.001909435s] END
	E0816 22:24:19.170279       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0816 22:24:19.171047       1 trace.go:205] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.166) (total time: 30004ms):
	Trace[939984059]: [30.004733433s] [30.004733433s] END
	E0816 22:24:19.171149       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0816 22:24:19.171258       1 trace.go:205] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.166) (total time: 30004ms):
	Trace[911902081]: [30.004945736s] [30.004945736s] END
	E0816 22:24:19.171265       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> coredns [e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7ae91e86dd75dee9ae501cb58003198b
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.985023] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1731 comm=systemd-network
	[  +1.088197] vboxguest: loading out-of-tree module taints kernel.
	[  +0.006251] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.889854] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[ +16.286436] systemd-fstab-generator[2098]: Ignoring "noauto" for root device
	[  +0.258185] systemd-fstab-generator[2128]: Ignoring "noauto" for root device
	[  +0.135377] systemd-fstab-generator[2143]: Ignoring "noauto" for root device
	[  +0.180446] systemd-fstab-generator[2173]: Ignoring "noauto" for root device
	[Aug16 22:23] systemd-fstab-generator[2381]: Ignoring "noauto" for root device
	[ +20.504547] systemd-fstab-generator[2808]: Ignoring "noauto" for root device
	[ +20.717915] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.551219] kauditd_printk_skb: 104 callbacks suppressed
	[Aug16 22:24] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.792051] systemd-fstab-generator[3754]: Ignoring "noauto" for root device
	[  +0.176916] systemd-fstab-generator[3767]: Ignoring "noauto" for root device
	[  +0.230657] systemd-fstab-generator[3792]: Ignoring "noauto" for root device
	[  +4.083098] kauditd_printk_skb: 2 callbacks suppressed
	[  +3.840195] NFSD: Unable to end grace period: -110
	[  +4.324119] systemd-fstab-generator[4543]: Ignoring "noauto" for root device
	[  +6.680726] kauditd_printk_skb: 29 callbacks suppressed
	[Aug16 22:25] kauditd_printk_skb: 14 callbacks suppressed
	[ +12.641213] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.313576] systemd-fstab-generator[5666]: Ignoring "noauto" for root device
	[  +0.847782] systemd-fstab-generator[5723]: Ignoring "noauto" for root device
	[  +1.051927] systemd-fstab-generator[5775]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [124fa393359f758ea47161b345d2cab4ce486d4473a4caad483449464d44315f] <==
	* 2021-08-16 22:23:41.064197 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210816222224-6986\" " with result "range_response_count:1 size:5052" took too long (6.421187445s) to execute
	2021-08-16 22:23:41.065847 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:341" took too long (6.446897155s) to execute
	2021-08-16 22:23:41.066285 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/node-controller\" " with result "range_response_count:1 size:242" took too long (5.09674902s) to execute
	2021-08-16 22:23:41.068005 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (6.28196539s) to execute
	2021-08-16 22:23:41.068259 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (763.710719ms) to execute
	2021-08-16 22:23:41.880435 W | etcdserver: read-only range request "key:\"/registry/masterleases/192.168.50.226\" " with result "range_response_count:0 size:5" took too long (776.335267ms) to execute
	2021-08-16 22:23:41.881080 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (597.366064ms) to execute
	2021-08-16 22:23:41.882354 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:4569" took too long (763.841142ms) to execute
	2021-08-16 22:23:41.883287 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (621.677263ms) to execute
	2021-08-16 22:23:41.884722 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (481.499599ms) to execute
	2021-08-16 22:23:41.885189 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-pause-20210816222224-6986\" " with result "range_response_count:1 size:5421" took too long (772.180278ms) to execute
	2021-08-16 22:23:42.453217 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (290.061418ms) to execute
	2021-08-16 22:23:42.455427 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (285.893643ms) to execute
	2021-08-16 22:23:42.456943 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-pause-20210816222224-6986\" " with result "range_response_count:1 size:6314" took too long (153.946258ms) to execute
	2021-08-16 22:23:42.458024 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (177.825431ms) to execute
	2021-08-16 22:23:44.267832 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:23:54.092150 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (701.802797ms) to execute
	2021-08-16 22:23:54.093518 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-ctgf5\" " with result "range_response_count:1 size:4473" took too long (1.090386256s) to execute
	2021-08-16 22:23:54.267392 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:23:57.768234 W | etcdserver: request "header:<ID:4263355585347158035 > lease_revoke:<id:3b2a7b510fcb7e67>" with result "size:29" took too long (771.90226ms) to execute
	2021-08-16 22:23:57.768903 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (374.444829ms) to execute
	2021-08-16 22:23:57.769379 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-ctgf5\" " with result "range_response_count:1 size:4473" took too long (765.115046ms) to execute
	2021-08-16 22:24:04.267548 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:24:14.267958 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:24:24.268321 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> etcd [3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423] <==
	* 
	* ==> etcd [69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549] <==
	* 2021-08-16 22:24:53.773065 W | auth: simple token is not cryptographically signed
	2021-08-16 22:24:53.837118 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	raft2021/08/16 22:24:53 INFO: e840193bf29c3b2a switched to configuration voters=(16735403960572853034)
	2021-08-16 22:24:53.849298 I | etcdserver/membership: added member e840193bf29c3b2a [https://192.168.50.226:2380] to cluster 99b90e1bea73c730
	2021-08-16 22:24:53.860198 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-16 22:24:53.864997 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-16 22:24:53.865214 I | embed: listening for peers on 192.168.50.226:2380
	2021-08-16 22:24:53.868083 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-16 22:24:53.871735 I | etcdserver/api: enabled capabilities for version 3.4
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a is starting a new election at term 2
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a became candidate at term 3
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a received MsgVoteResp from e840193bf29c3b2a at term 3
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a became leader at term 3
	raft2021/08/16 22:24:54 INFO: raft.node: e840193bf29c3b2a elected leader e840193bf29c3b2a at term 3
	2021-08-16 22:24:54.968820 I | embed: ready to serve client requests
	2021-08-16 22:24:54.969394 I | etcdserver: published {Name:pause-20210816222224-6986 ClientURLs:[https://192.168.50.226:2379]} to cluster 99b90e1bea73c730
	2021-08-16 22:24:54.971284 I | embed: serving client requests on 192.168.50.226:2379
	2021-08-16 22:24:54.971462 I | embed: ready to serve client requests
	2021-08-16 22:24:54.973508 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-16 22:25:03.067902 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-gkxhz\" " with result "range_response_count:1 size:4860" took too long (140.807991ms) to execute
	2021-08-16 22:25:06.747736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:25:08.138740 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:25:10.645123 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:3838" took too long (108.124514ms) to execute
	2021-08-16 22:25:10.645989 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:665" took too long (107.967343ms) to execute
	2021-08-16 22:25:18.137756 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  22:25:54 up 3 min,  0 users,  load average: 1.93, 1.42, 0.59
	Linux pause-20210816222224-6986 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [38dc61b214a9cbd019de4ca9ab52fb6baf728336de6d715df22b027522ad8b20] <==
	* I0816 22:23:41.890272       1 trace.go:205] Trace[914939944]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:23:41.117) (total time: 772ms):
	Trace[914939944]: [772.29448ms] [772.29448ms] END
	I0816 22:23:41.897880       1 trace.go:205] Trace[372773048]: "List" url:/api/v1/nodes,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/system:serviceaccount:kube-system:node-controller,client:192.168.50.226,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:41.117) (total time: 780ms):
	Trace[372773048]: ---"Listing from storage done" 773ms (22:23:00.891)
	Trace[372773048]: [780.024685ms] [780.024685ms] END
	I0816 22:23:41.899245       1 trace.go:205] Trace[189474875]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-20210816222224-6986,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.50.226,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (16-Aug-2021 22:23:41.107) (total time: 791ms):
	Trace[189474875]: ---"About to write a response" 791ms (22:23:00.899)
	Trace[189474875]: [791.769473ms] [791.769473ms] END
	I0816 22:23:41.914143       1 trace.go:205] Trace[1803257945]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (16-Aug-2021 22:23:41.101) (total time: 812ms):
	Trace[1803257945]: ---"initial value restored" 795ms (22:23:00.897)
	Trace[1803257945]: [812.099383ms] [812.099383ms] END
	I0816 22:23:46.219827       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0816 22:23:46.322056       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0816 22:23:54.101003       1 trace.go:205] Trace[1429856954]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-ctgf5,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.50.1,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:53.002) (total time: 1098ms):
	Trace[1429856954]: ---"About to write a response" 1098ms (22:23:00.100)
	Trace[1429856954]: [1.0988209s] [1.0988209s] END
	I0816 22:23:56.194218       1 client.go:360] parsed scheme: "passthrough"
	I0816 22:23:56.194943       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 22:23:56.195388       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0816 22:23:57.770900       1 trace.go:205] Trace[2103117378]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-ctgf5,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.50.1,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:57.002) (total time: 767ms):
	Trace[2103117378]: ---"About to write a response" 767ms (22:23:00.770)
	Trace[2103117378]: [767.944134ms] [767.944134ms] END
	I0816 22:24:32.818404       1 client.go:360] parsed scheme: "passthrough"
	I0816 22:24:32.818597       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 22:24:32.818691       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-apiserver [53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf] <==
	* I0816 22:24:59.052878       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0816 22:24:59.052897       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0816 22:24:59.071128       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0816 22:24:59.071704       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0816 22:24:59.072328       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0816 22:24:59.072872       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
	I0816 22:24:59.173327       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0816 22:24:59.176720       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	E0816 22:24:59.181278       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0816 22:24:59.206356       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0816 22:24:59.225165       1 cache.go:39] Caches are synced for autoregister controller
	I0816 22:24:59.227741       1 apf_controller.go:299] Running API Priority and Fairness config worker
	I0816 22:24:59.230223       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0816 22:24:59.244026       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 22:24:59.248943       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0816 22:25:00.021310       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0816 22:25:00.022052       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0816 22:25:00.034218       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0816 22:25:01.108795       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0816 22:25:01.182177       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0816 22:25:01.279321       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0816 22:25:01.344553       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 22:25:01.382891       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 22:25:11.471022       1 controller.go:611] quota admission added evaluator for: endpoints
	I0816 22:25:13.002505       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612] <==
	* 
	* ==> kube-controller-manager [825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7] <==
	* I0816 22:25:12.900492       1 shared_informer.go:247] Caches are synced for GC 
	I0816 22:25:12.900735       1 shared_informer.go:247] Caches are synced for job 
	I0816 22:25:12.908539       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0816 22:25:12.910182       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0816 22:25:12.925990       1 shared_informer.go:247] Caches are synced for stateful set 
	I0816 22:25:12.926195       1 shared_informer.go:247] Caches are synced for HPA 
	I0816 22:25:12.931999       1 shared_informer.go:247] Caches are synced for attach detach 
	I0816 22:25:12.933971       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0816 22:25:12.934151       1 shared_informer.go:247] Caches are synced for deployment 
	I0816 22:25:12.943776       1 shared_informer.go:247] Caches are synced for ephemeral 
	I0816 22:25:12.963727       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I0816 22:25:12.969209       1 shared_informer.go:247] Caches are synced for taint 
	I0816 22:25:12.969381       1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone: 
	W0816 22:25:12.969524       1 node_lifecycle_controller.go:1013] Missing timestamp for Node pause-20210816222224-6986. Assuming now as a timestamp.
	I0816 22:25:12.969564       1 node_lifecycle_controller.go:1214] Controller detected that zone  is now in state Normal.
	I0816 22:25:12.970457       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0816 22:25:12.970831       1 event.go:291] "Event occurred" object="pause-20210816222224-6986" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-20210816222224-6986 event: Registered Node pause-20210816222224-6986 in Controller"
	I0816 22:25:12.974749       1 shared_informer.go:247] Caches are synced for endpoint 
	I0816 22:25:13.000548       1 shared_informer.go:247] Caches are synced for disruption 
	I0816 22:25:13.000739       1 disruption.go:371] Sending events to api server.
	I0816 22:25:13.004608       1 shared_informer.go:247] Caches are synced for resource quota 
	I0816 22:25:13.016848       1 shared_informer.go:247] Caches are synced for resource quota 
	I0816 22:25:13.386564       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0816 22:25:13.386597       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0816 22:25:13.440139       1 shared_informer.go:247] Caches are synced for garbage collector 
	
	* 
	* ==> kube-controller-manager [8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4] <==
	* 
	* ==> kube-proxy [2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2] <==
	* I0816 22:25:00.641886       1 node.go:172] Successfully retrieved node IP: 192.168.50.226
	I0816 22:25:00.641938       1 server_others.go:140] Detected node IP 192.168.50.226
	W0816 22:25:00.642012       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0816 22:25:00.805515       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0816 22:25:00.805539       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0816 22:25:00.805560       1 server_others.go:212] Using iptables Proxier.
	I0816 22:25:00.806059       1 server.go:643] Version: v1.21.3
	I0816 22:25:00.807251       1 config.go:315] Starting service config controller
	I0816 22:25:00.807281       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:25:00.807307       1 config.go:224] Starting endpoint slice config controller
	I0816 22:25:00.807313       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:25:00.812511       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 22:25:00.816722       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:25:00.907844       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 22:25:00.907906       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d] <==
	* 
	* ==> kube-proxy [a8503bd796d5d979a6e1b8b5154986e8b77de391b4f091211451ea5f52808e52] <==
	* I0816 22:23:49.316430       1 node.go:172] Successfully retrieved node IP: 192.168.50.226
	I0816 22:23:49.316608       1 server_others.go:140] Detected node IP 192.168.50.226
	W0816 22:23:49.316822       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0816 22:23:49.402698       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0816 22:23:49.403462       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0816 22:23:49.404047       1 server_others.go:212] Using iptables Proxier.
	I0816 22:23:49.407950       1 server.go:643] Version: v1.21.3
	I0816 22:23:49.410864       1 config.go:315] Starting service config controller
	I0816 22:23:49.413112       1 config.go:224] Starting endpoint slice config controller
	I0816 22:23:49.419474       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:23:49.421254       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:23:49.413718       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:23:49.425958       1 shared_informer.go:247] Caches are synced for service config 
	W0816 22:23:49.425586       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:23:49.520425       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1] <==
	* I0816 22:24:54.634243       1 serving.go:347] Generated self-signed cert in-memory
	W0816 22:24:59.095457       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0816 22:24:59.098028       1 authentication.go:337] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0816 22:24:59.098491       1 authentication.go:338] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0816 22:24:59.098734       1 authentication.go:339] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0816 22:24:59.166481       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0816 22:24:59.178395       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:24:59.177851       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0816 22:24:59.194249       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:24:59.304036       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	W0816 22:25:44.233423       1 reflector.go:436] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	* 
	* ==> kube-scheduler [8710cefecdbe5d31cd44e9ae3378bc08cbc56001326a1cb38026755196cac7d1] <==
	* E0816 22:23:21.172468       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:23:21.189536       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:23:21.300836       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.329219       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.448607       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:23:21.504104       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.504531       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:23:21.597849       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:23:21.612843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:23:21.671333       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:23:21.827198       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:23:21.852843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:23:21.867015       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.910139       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:23:23.291774       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.356078       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.452841       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:23:23.464942       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:23:23.644764       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:23:23.649142       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.710606       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.980099       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:23:24.052112       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:23:24.168543       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0816 22:23:30.043826       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a] <==
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:22:35 UTC, end at Mon 2021-08-16 22:25:55 UTC. --
	Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.919357    4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:59.020392    4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.121233    4551 kuberuntime_manager.go:1044] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.122462    4551 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.228577    4551 kubelet_node_status.go:109] "Node was previously registered" node="pause-20210816222224-6986"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.228853    4551 kubelet_node_status.go:74] "Successfully registered node" node="pause-20210816222224-6986"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.536346    4551 apiserver.go:52] "Watching apiserver"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.540959    4551 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.541581    4551 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.609734    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c0e0899-31c1-477a-a6d4-2844091deea2-kube-proxy\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.610130    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c0e0899-31c1-477a-a6d4-2844091deea2-xtables-lock\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.610271    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c0e0899-31c1-477a-a6d4-2844091deea2-lib-modules\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.610503    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2grh\" (UniqueName: \"kubernetes.io/projected/3c0e0899-31c1-477a-a6d4-2844091deea2-kube-api-access-b2grh\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.711424    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgpd2\" (UniqueName: \"kubernetes.io/projected/5aa76749-775e-423d-bbf9-680a20a27051-kube-api-access-rgpd2\") pod \"coredns-558bd4d5db-gkxhz\" (UID: \"5aa76749-775e-423d-bbf9-680a20a27051\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.712578    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5aa76749-775e-423d-bbf9-680a20a27051-config-volume\") pod \"coredns-558bd4d5db-gkxhz\" (UID: \"5aa76749-775e-423d-bbf9-680a20a27051\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.713123    4551 reconciler.go:157] "Reconciler: start to sync state"
	Aug 16 22:25:00 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:00.142816    4551 scope.go:111] "RemoveContainer" containerID="9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d"
	Aug 16 22:25:03 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:03.115940    4551 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Aug 16 22:25:11 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:11.548694    4551 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:25:11 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:11.620746    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4f138dc7-da0e-4775-b4de-b0f7d616b212-tmp\") pod \"storage-provisioner\" (UID: \"4f138dc7-da0e-4775-b4de-b0f7d616b212\") "
	Aug 16 22:25:11 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:11.621027    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7pzn\" (UniqueName: \"kubernetes.io/projected/4f138dc7-da0e-4775-b4de-b0f7d616b212-kube-api-access-n7pzn\") pod \"storage-provisioner\" (UID: \"4f138dc7-da0e-4775-b4de-b0f7d616b212\") "
	Aug 16 22:25:19 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:19.625547    4551 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 16 22:25:19 pause-20210816222224-6986 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:25:19 pause-20210816222224-6986 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:25:19 pause-20210816222224-6986 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 80 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc00032b490, 0x0)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc00032b480)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0003e1260, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0003bd400, 0x18e5530, 0xc00032a600, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0001b7d20)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0001b7d20, 0x18b3d60, 0xc0001bb8c0, 0xc00038ff01, 0xc000136600)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0001b7d20, 0x3b9aca00, 0x0, 0x48ef01, 0xc000136600)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0001b7d20, 0x3b9aca00, 0xc000136600)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

-- /stdout --
** stderr ** 
	E0816 22:25:54.584879   11774 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	E0816 22:25:54.756523   11774 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:25:54Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log\": lstat /var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:25:54Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log\\\": lstat /var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:25:55.078753   11774 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:25:55Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log\": lstat /var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:25:55Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log\\\": lstat /var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:25:55.305682   11774 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:25:55Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log\": lstat /var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:25:55Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log\\\": lstat /var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:25:55.425515   11774 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:25:55Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log\": lstat /var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:25:55Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log\\\": lstat /var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:25:55.640888   11774 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:25:55Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log\": lstat /var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:25:55Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log\\\": lstat /var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log: no such file or directory\"\n\n** /stderr **"
	! unable to fetch logs for: describe nodes, etcd [3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423], kube-apiserver [7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612], kube-controller-manager [8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4], kube-proxy [9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d], kube-scheduler [97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a]

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestPause/serial/Pause (36.64s)

TestPause/serial/VerifyStatus (13.44s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20210816222224-6986 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20210816222224-6986 --output=json --layout=cluster: exit status 2 (479.430455ms)

-- stdout --
	{"Name":"pause-20210816222224-6986","StatusCode":101,"StatusName":"Pausing","Step":"Pausing","StepDetail":"* Pausing node pause-20210816222224-6986 ...","BinaryVersion":"v1.22.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210816222224-6986","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0816 22:25:56.244106   11810 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E0816 22:25:56.244142   11810 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax
	E0816 22:25:56.244174   11810 status.go:602] exit code not found: strconv.Atoi: parsing "": invalid syntax

** /stderr **
pause_test.go:190: incorrect status code: 101
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816222224-6986 -n pause-20210816222224-6986
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816222224-6986 -n pause-20210816222224-6986: exit status 2 (561.234171ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/VerifyStatus FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/VerifyStatus]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210816222224-6986 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210816222224-6986 logs -n 25: exit status 110 (12.379327901s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                  |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| -p      | multinode-20210816215441-6986          | multinode-20210816215441-6986          | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:08:07 UTC | Mon, 16 Aug 2021 22:11:11 UTC |
	|         | stop                                   |                                        |         |         |                               |                               |
	| start   | -p                                     | multinode-20210816215441-6986          | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:11:11 UTC | Mon, 16 Aug 2021 22:15:19 UTC |
	|         | multinode-20210816215441-6986          |                                        |         |         |                               |                               |
	|         | --wait=true -v=8                       |                                        |         |         |                               |                               |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | --driver=kvm2                          |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| start   | -p                                     | multinode-20210816215441-6986-m03      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:19 UTC | Mon, 16 Aug 2021 22:16:20 UTC |
	|         | multinode-20210816215441-6986-m03      |                                        |         |         |                               |                               |
	|         | --driver=kvm2                          |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| delete  | -p                                     | multinode-20210816215441-6986-m03      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:16:20 UTC | Mon, 16 Aug 2021 22:16:21 UTC |
	|         | multinode-20210816215441-6986-m03      |                                        |         |         |                               |                               |
	| delete  | -p                                     | multinode-20210816215441-6986          | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:16:21 UTC | Mon, 16 Aug 2021 22:16:23 UTC |
	|         | multinode-20210816215441-6986          |                                        |         |         |                               |                               |
	| start   | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:18:07 UTC | Mon, 16 Aug 2021 22:19:45 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | --wait=true --preload=false            |                                        |         |         |                               |                               |
	|         | --driver=kvm2                          |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0           |                                        |         |         |                               |                               |
	| ssh     | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:45 UTC | Mon, 16 Aug 2021 22:19:47 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | -- sudo crictl pull busybox            |                                        |         |         |                               |                               |
	| start   | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:48 UTC | Mon, 16 Aug 2021 22:20:39 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=kvm2         |                                        |         |         |                               |                               |
	|         |  --container-runtime=containerd        |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3           |                                        |         |         |                               |                               |
	| ssh     | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:39 UTC | Mon, 16 Aug 2021 22:20:39 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | -- sudo crictl image ls                |                                        |         |         |                               |                               |
	| delete  | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:39 UTC | Mon, 16 Aug 2021 22:20:40 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	| start   | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:40 UTC | Mon, 16 Aug 2021 22:21:45 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	|         | --memory=2048 --driver=kvm2            |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| stop    | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:46 UTC | Mon, 16 Aug 2021 22:21:46 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	|         | --cancel-scheduled                     |                                        |         |         |                               |                               |
	| stop    | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:58 UTC | Mon, 16 Aug 2021 22:22:05 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	|         | --schedule 5s                          |                                        |         |         |                               |                               |
	| delete  | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:23 UTC | Mon, 16 Aug 2021 22:22:24 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	| delete  | -p kubenet-20210816222224-6986         | kubenet-20210816222224-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:22:25 UTC |
	| delete  | -p false-20210816222225-6986           | false-20210816222225-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:25 UTC | Mon, 16 Aug 2021 22:22:25 UTC |
	| start   | -p                                     | force-systemd-env-20210816222224-6986  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:24:04 UTC |
	|         | force-systemd-env-20210816222224-6986  |                                        |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | -v=5 --driver=kvm2                     |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| -p      | force-systemd-env-20210816222224-6986  | force-systemd-env-20210816222224-6986  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:04 UTC | Mon, 16 Aug 2021 22:24:04 UTC |
	|         | ssh cat /etc/containerd/config.toml    |                                        |         |         |                               |                               |
	| delete  | -p                                     | force-systemd-env-20210816222224-6986  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:04 UTC | Mon, 16 Aug 2021 22:24:05 UTC |
	|         | force-systemd-env-20210816222224-6986  |                                        |         |         |                               |                               |
	| start   | -p pause-20210816222224-6986           | pause-20210816222224-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:24:28 UTC |
	|         | --memory=2048                          |                                        |         |         |                               |                               |
	|         | --install-addons=false                 |                                        |         |         |                               |                               |
	|         | --wait=all --driver=kvm2               |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| start   | -p                                     | kubernetes-upgrade-20210816222225-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:25 UTC | Mon, 16 Aug 2021 22:24:48 UTC |
	|         | kubernetes-upgrade-20210816222225-6986 |                                        |         |         |                               |                               |
	|         | --memory=2200                          |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0           |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=kvm2   |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| stop    | -p                                     | kubernetes-upgrade-20210816222225-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:49 UTC | Mon, 16 Aug 2021 22:24:53 UTC |
	|         | kubernetes-upgrade-20210816222225-6986 |                                        |         |         |                               |                               |
	| start   | -p                                     | offline-containerd-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:25:02 UTC |
	|         | offline-containerd-20210816222224-6986 |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048   |                                        |         |         |                               |                               |
	|         | --wait=true --driver=kvm2              |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| delete  | -p                                     | offline-containerd-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:02 UTC | Mon, 16 Aug 2021 22:25:03 UTC |
	|         | offline-containerd-20210816222224-6986 |                                        |         |         |                               |                               |
	| start   | -p pause-20210816222224-6986           | pause-20210816222224-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:28 UTC | Mon, 16 Aug 2021 22:25:13 UTC |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                     |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:25:29
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:25:29.865623   11635 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:25:29.865693   11635 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:25:29.865697   11635 out.go:311] Setting ErrFile to fd 2...
	I0816 22:25:29.865700   11635 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:25:29.865823   11635 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:25:29.866117   11635 out.go:305] Setting JSON to false
	I0816 22:25:29.903940   11635 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":4092,"bootTime":1629148638,"procs":190,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:25:29.904062   11635 start.go:121] virtualization: kvm guest
	I0816 22:25:29.906708   11635 out.go:177] * [stopped-upgrade-20210816222405-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:25:29.908312   11635 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:29.906879   11635 notify.go:169] Checking for updates...
	I0816 22:25:29.909776   11635 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:25:29.911276   11635 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:25:29.811958   10879 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:25:29.817441   10879 start.go:413] Will wait 60s for crictl version
	I0816 22:25:29.817496   10879 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:25:29.853053   10879 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0816 22:25:29.853115   10879 ssh_runner.go:149] Run: containerd --version
	I0816 22:25:29.886777   10879 ssh_runner.go:149] Run: containerd --version
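The three runs above are the container-runtime probe for the kubernetes-upgrade profile (pid 10879): minikube waits up to 60s for crictl to answer, parses the RuntimeName/RuntimeVersion block, then double-checks containerd directly. A sketch of the same probe done by hand, using the `-p <profile> ssh <cmd>` form that appears in the audit table above (illustrative, not part of this run):

    out/minikube-linux-amd64 -p kubernetes-upgrade-20210816222225-6986 ssh "sudo crictl version"
    out/minikube-linux-amd64 -p kubernetes-upgrade-20210816222225-6986 ssh "containerd --version"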
	I0816 22:25:29.912690   11635 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:25:29.913065   11635 config.go:177] Loaded profile config "stopped-upgrade-20210816222405-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0816 22:25:29.913582   11635 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:29.913647   11635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:29.927429   11635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45201
	I0816 22:25:29.927806   11635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:29.928407   11635 main.go:130] libmachine: Using API Version  1
	I0816 22:25:29.928426   11635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:29.928828   11635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:29.928990   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:29.930681   11635 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0816 22:25:29.930718   11635 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:25:29.931039   11635 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:29.931072   11635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:29.942479   11635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40149
	I0816 22:25:29.942868   11635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:29.943307   11635 main.go:130] libmachine: Using API Version  1
	I0816 22:25:29.943323   11635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:29.943763   11635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:29.943943   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:29.977758   11635 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 22:25:29.977793   11635 start.go:278] selected driver: kvm2
	I0816 22:25:29.977800   11635 start.go:751] validating driver "kvm2" against &{Name:stopped-upgrade-20210816222405-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.16.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:stopped-upgrade-20210816222405-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.139 Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:29.977911   11635 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 22:25:29.979236   11635 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:29.979388   11635 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 22:25:29.992607   11635 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0816 22:25:29.992893   11635 cni.go:93] Creating CNI manager for ""
	I0816 22:25:29.992910   11635 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:25:29.992918   11635 start_flags.go:277] config:
	{Name:stopped-upgrade-20210816222405-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.16.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:stopped-upgrade-20210816222405-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.139 Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:29.993026   11635 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:29.994994   11635 out.go:177] * Starting control plane node stopped-upgrade-20210816222405-6986 in cluster stopped-upgrade-20210816222405-6986
	I0816 22:25:29.995015   11635 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	W0816 22:25:30.055078   11635 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.20.0-containerd-overlay2-amd64.tar.lz4 status code: 404
	I0816 22:25:30.055326   11635 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/stopped-upgrade-20210816222405-6986/config.json ...
	I0816 22:25:30.055447   11635 cache.go:108] acquiring lock: {Name:mk44a899d3e13d1e1a41236ca93bfa4c540d90ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055469   11635 cache.go:108] acquiring lock: {Name:mk54690e8b8165106a936f57493f4a5f28a2f038 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055479   11635 cache.go:108] acquiring lock: {Name:mk5fa6434b4b67f17fb247c2a1febaaba95afc21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055522   11635 cache.go:108] acquiring lock: {Name:mk248d78835e4d0dd7deebdde93e709059900376 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055596   11635 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:25:30.055621   11635 start.go:313] acquiring machines lock for stopped-upgrade-20210816222405-6986: {Name:mk808edd60d1305a42bb85791729eff4573dbb15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 22:25:30.055639   11635 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.20.0
	I0816 22:25:30.055653   11635 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.20.0
	I0816 22:25:30.055661   11635 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.20.0
	I0816 22:25:30.055667   11635 cache.go:108] acquiring lock: {Name:mk74554b1ad079cff2e1d01801c80cf158c3d0db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055449   11635 cache.go:108] acquiring lock: {Name:mk21e396b81c69d7c5a1e31157ecfaad7d142ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055784   11635 image.go:133] retrieving image: k8s.gcr.io/coredns:1.7.0
	I0816 22:25:30.055784   11635 cache.go:108] acquiring lock: {Name:mk4f799cc9e7aed39d4f75ef9ab783b3653bcaec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055644   11635 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0816 22:25:30.055857   11635 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.20.0
	I0816 22:25:30.055838   11635 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 404.352µs
	I0816 22:25:30.055869   11635 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0816 22:25:30.055492   11635 cache.go:108] acquiring lock: {Name:mk35426dc160f9577622987ee511aee9c6194a42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055863   11635 cache.go:108] acquiring lock: {Name:mk7034d6c48699d6f6387acd67a2f9aff6580cde Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055899   11635 cache.go:108] acquiring lock: {Name:mk83079d57b39c409d978da9d87f61040fa2879d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055991   11635 image.go:133] retrieving image: k8s.gcr.io/pause:3.2
	I0816 22:25:30.055993   11635 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0816 22:25:30.056005   11635 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0816 22:25:30.056013   11635 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 117.345µs
	I0816 22:25:30.056023   11635 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 216.257µs
	I0816 22:25:30.056038   11635 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0816 22:25:30.056041   11635 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0816 22:25:30.056058   11635 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.13-0
	I0816 22:25:30.069442   11635 image.go:171] found k8s.gcr.io/pause:3.2 locally: &{UncompressedImageCore:0xc0015120b8 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:30.069470   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2
	I0816 22:25:30.125132   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
	I0816 22:25:30.125180   11635 cache.go:97] cache image "k8s.gcr.io/pause:3.2" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 69.39782ms
	I0816 22:25:30.125195   11635 cache.go:81] save to tar file k8s.gcr.io/pause:3.2 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
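Each exists/took/succeeded triple above records one image tarball materialized under .minikube/cache/images/<registry>/<name>_<tag>. Assuming the `cache add` subcommand available in this minikube version, the same cache can be pre-seeded by hand (hypothetical invocation, not part of this run):

    out/minikube-linux-amd64 cache add k8s.gcr.io/pause:3.2
    ls ~/.minikube/cache/images/k8s.gcr.io/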
	I0816 22:25:30.603136   11635 image.go:171] found k8s.gcr.io/coredns:1.7.0 locally: &{UncompressedImageCore:0xc0001b80d0 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:30.603171   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0
	I0816 22:25:30.944848   11635 start.go:317] acquired machines lock for "stopped-upgrade-20210816222405-6986" in 889.202815ms
	I0816 22:25:30.944895   11635 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:25:30.944913   11635 fix.go:55] fixHost starting: 
	I0816 22:25:30.945363   11635 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:30.945411   11635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:30.958997   11635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43851
	I0816 22:25:30.959558   11635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:30.960174   11635 main.go:130] libmachine: Using API Version  1
	I0816 22:25:30.960191   11635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:30.960618   11635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:30.961025   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:30.961188   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetState
	I0816 22:25:30.964772   11635 fix.go:108] recreateIfNeeded on stopped-upgrade-20210816222405-6986: state=Stopped err=<nil>
	I0816 22:25:30.964804   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	W0816 22:25:30.964947   11635 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:25:29.920921   10879 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on containerd 1.4.9 ...
	I0816 22:25:29.920959   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetIP
	I0816 22:25:29.926612   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:29.927031   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:29.927059   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:29.927194   10879 ssh_runner.go:149] Run: grep 192.168.116.1	host.minikube.internal$ /etc/hosts
	I0816 22:25:29.931935   10879 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.116.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
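The one-liner above is an idempotent /etc/hosts update: strip any existing host.minikube.internal entry, append the current gateway IP, and copy the result back as root. The same pattern in generic form (IP and hostname are placeholders copied from this run):

    IP=192.168.116.1; NAME=host.minikube.internal
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$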
	I0816 22:25:29.945360   10879 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 22:25:29.945419   10879 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:29.980071   10879 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:25:29.980092   10879 containerd.go:517] Images already preloaded, skipping extraction
	I0816 22:25:29.980136   10879 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:30.014510   10879 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:25:30.014534   10879 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:25:30.014596   10879 ssh_runner.go:149] Run: sudo crictl info
	I0816 22:25:30.049631   10879 cni.go:93] Creating CNI manager for ""
	I0816 22:25:30.049653   10879 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:25:30.049662   10879 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:25:30.049673   10879 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.116.91 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20210816222225-6986 NodeName:kubernetes-upgrade-20210816222225-6986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.116.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.116.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:25:30.049804   10879 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.116.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "kubernetes-upgrade-20210816222225-6986"
	  kubeletExtraArgs:
	    node-ip: 192.168.116.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.116.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
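This rendered config is what the restart path later feeds to kubeadm one phase at a time; the exact commands appear verbatim further down in this log (22:25:30.769 onward). Collected into a plain loop for readability (a sketch of what the log shows, not an extra step in the run):

    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done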
	
	I0816 22:25:30.049880   10879 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-20210816222225-6986 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.116.91 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210816222225-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0816 22:25:30.049928   10879 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0816 22:25:30.057977   10879 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:25:30.058039   10879 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:25:30.069033   10879 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (559 bytes)
	I0816 22:25:30.084505   10879 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0816 22:25:30.099259   10879 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0816 22:25:30.117789   10879 ssh_runner.go:149] Run: grep 192.168.116.91	control-plane.minikube.internal$ /etc/hosts
	I0816 22:25:30.123120   10879 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.116.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:25:30.137790   10879 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986 for IP: 192.168.116.91
	I0816 22:25:30.137839   10879 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:25:30.137860   10879 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:25:30.137924   10879 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/client.key
	I0816 22:25:30.137959   10879 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/apiserver.key.0bcaee26
	I0816 22:25:30.137982   10879 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/proxy-client.key
	I0816 22:25:30.138107   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem (1338 bytes)
	W0816 22:25:30.138164   10879 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
	I0816 22:25:30.138180   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 22:25:30.138217   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:25:30.138260   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:25:30.138286   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1675 bytes)
	I0816 22:25:30.138335   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:25:30.139672   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:25:30.162501   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 22:25:30.187988   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:25:30.208990   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 22:25:30.232139   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:25:30.260638   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 22:25:30.287352   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:25:30.316722   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 22:25:30.346450   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
	I0816 22:25:30.369471   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
	I0816 22:25:30.397914   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:25:30.433581   10879 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:25:30.454657   10879 ssh_runner.go:149] Run: openssl version
	I0816 22:25:30.463107   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
	I0816 22:25:30.475906   10879 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6986.pem
	I0816 22:25:30.482988   10879 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:49 /usr/share/ca-certificates/6986.pem
	I0816 22:25:30.483059   10879 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
	I0816 22:25:30.492438   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
	I0816 22:25:30.507780   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
	I0816 22:25:30.522931   10879 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/69862.pem
	I0816 22:25:30.529692   10879 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:49 /usr/share/ca-certificates/69862.pem
	I0816 22:25:30.529753   10879 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
	I0816 22:25:30.540308   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:25:30.555395   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:25:30.571425   10879 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:30.577689   10879 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:42 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:30.577743   10879 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:30.588310   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
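The openssl/ln pairs above install each CA the way OpenSSL expects: the PEM lands in /usr/share/ca-certificates, and a symlink named after its subject hash (b5213941.0 for minikubeCA.pem here) is created in /etc/ssl/certs, which is the lookup key OpenSSL uses at verification time. The derivation as a standalone sketch:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"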
	I0816 22:25:30.599536   10879 kubeadm.go:390] StartCluster: {Name:kubernetes-upgrade-20210816222225-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210816222225-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.116.91 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:30.599646   10879 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0816 22:25:30.599708   10879 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:25:30.645578   10879 cri.go:76] found id: ""
	I0816 22:25:30.645662   10879 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:25:30.656753   10879 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:25:30.656775   10879 kubeadm.go:600] restartCluster start
	I0816 22:25:30.656823   10879 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:25:30.665356   10879 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:30.666456   10879 kubeconfig.go:117] verify returned: extract IP: "kubernetes-upgrade-20210816222225-6986" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:30.666789   10879 kubeconfig.go:128] "kubernetes-upgrade-20210816222225-6986" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:25:30.667454   10879 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:25:30.668301   10879 kapi.go:59] client config for kubernetes-upgrade-20210816222225-6986: &rest.Config{Host:"https://192.168.116.91:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 22:25:30.670154   10879 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:25:30.680113   10879 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta2
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.116.91
	@@ -17,7 +17,7 @@
	     node-ip: 192.168.116.91
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta2
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.116.91"]
	@@ -31,7 +31,7 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-20210816222225-6986
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 dns:
	   type: CoreDNS
	@@ -39,8 +39,8 @@
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.116.91:2381
	-kubernetesVersion: v1.14.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.22.0-rc.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I0816 22:25:30.680129   10879 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:25:30.680144   10879 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:25:30.680191   10879 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:25:30.726916   10879 cri.go:76] found id: ""
	I0816 22:25:30.726997   10879 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:25:30.746591   10879 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:25:30.758779   10879 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:25:30.758838   10879 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:25:30.769229   10879 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:25:30.769260   10879 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:30.999868   10879 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:32.699195   10879 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.699296673s)
	I0816 22:25:32.699238   10879 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:33.156614   10879 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:33.351483   10879 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:33.492135   10879 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:25:33.492214   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:34.016071   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:30.969443   11635 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-20210816222405-6986" ...
	I0816 22:25:30.969474   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .Start
	I0816 22:25:30.969661   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Ensuring networks are active...
	I0816 22:25:30.972266   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Ensuring network default is active
	I0816 22:25:30.972626   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Ensuring network minikube-net is active
	I0816 22:25:30.973378   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Getting domain xml...
	I0816 22:25:30.975969   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Creating domain...
	I0816 22:25:31.337824   11635 image.go:171] found k8s.gcr.io/kube-scheduler:v1.20.0 locally: &{UncompressedImageCore:0xc0005f0060 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:31.337868   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0
	I0816 22:25:31.505137   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Waiting to get IP...
	I0816 22:25:31.505762   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:31.507077   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has current primary IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:31.507133   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Found IP for machine: 192.168.94.139
	I0816 22:25:31.507148   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Reserving static IP address...
	I0816 22:25:31.507652   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "stopped-upgrade-20210816222405-6986", mac: "52:54:00:48:08:ec", ip: "192.168.94.139"} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:24:28 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:31.507684   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-20210816222405-6986", mac: "52:54:00:48:08:ec", ip: "192.168.94.139"}
	I0816 22:25:31.507704   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | Getting to WaitForSSH function...
	I0816 22:25:31.507722   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Reserved static IP address: 192.168.94.139
	I0816 22:25:31.507732   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Waiting for SSH to be available...
	I0816 22:25:31.513971   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:31.514423   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:24:28 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:31.514456   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:31.514613   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | Using SSH client type: external
	I0816 22:25:31.514740   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816222405-6986/id_rsa (-rw-------)
	I0816 22:25:31.514792   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.94.139 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816222405-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:25:31.514855   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | About to run SSH command:
	I0816 22:25:31.514871   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | exit 0
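WaitForSSH above simply retries `exit 0` through an external ssh client with host-key checking disabled until the restarted guest answers. An equivalent one-shot probe, with the key path abbreviated to $MINIKUBE_HOME:

    ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
      -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
      -i $MINIKUBE_HOME/machines/stopped-upgrade-20210816222405-6986/id_rsa \
      docker@192.168.94.139 'exit 0'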
	I0816 22:25:32.653039   11635 image.go:171] found k8s.gcr.io/kube-controller-manager:v1.20.0 locally: &{UncompressedImageCore:0xc000114030 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:32.653105   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0
	I0816 22:25:32.791720   11635 image.go:171] found k8s.gcr.io/kube-apiserver:v1.20.0 locally: &{UncompressedImageCore:0xc001512008 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:32.791795   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0
	I0816 22:25:34.107008   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 exists
	I0816 22:25:34.107061   11635 cache.go:97] cache image "k8s.gcr.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0" took 4.051394985s
	I0816 22:25:34.107085   11635 cache.go:81] save to tar file k8s.gcr.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 succeeded
	I0816 22:25:34.315350   11635 image.go:171] found k8s.gcr.io/etcd:3.4.13-0 locally: &{UncompressedImageCore:0xc000114050 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:34.315401   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0
	I0816 22:25:34.340626   11635 image.go:171] found k8s.gcr.io/kube-proxy:v1.20.0 locally: &{UncompressedImageCore:0xc000010208 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:34.340673   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0
	I0816 22:25:34.809104   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 exists
	I0816 22:25:34.809158   11635 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0" took 4.753722894s
	I0816 22:25:34.809180   11635 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 succeeded
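Each cache.go triple above has the same shape: check whether the image tarball is already on disk, otherwise save it, then log how long the round trip took. A rough sketch of that bookkeeping, with hypothetical paths and a stand-in save function (minikube's real cache package does considerably more):

package main

import (
	"fmt"
	"os"
	"time"
)

// saveIfMissing skips the download when the cached tarball already
// exists, otherwise times the save and reports the duration, matching
// the "exists" / "took" / "succeeded" lines in the log.
func saveIfMissing(image, tarPath string, save func() error) error {
	if _, err := os.Stat(tarPath); err == nil {
		fmt.Printf("%s exists\n", tarPath)
		return nil
	}
	start := time.Now()
	if err := save(); err != nil {
		return fmt.Errorf("save %s: %w", image, err)
	}
	fmt.Printf("cache image %q -> %q took %s\n", image, tarPath, time.Since(start))
	return nil
}

func main() {
	_ = saveIfMissing("k8s.gcr.io/coredns:1.7.0", "/tmp/coredns_1.7.0.tar",
		func() error { return os.WriteFile("/tmp/coredns_1.7.0.tar", nil, 0o644) })
}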
	I0816 22:25:34.516083   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:35.015857   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:35.519104   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:36.016162   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:36.515658   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:37.015674   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:37.515739   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:38.015673   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:38.515650   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:39.015668   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:39.515980   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:40.015682   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:40.515733   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:41.016134   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:41.516902   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:42.015609   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:42.516119   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:43.016137   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:43.515737   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:44.019076   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:40.829030   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 exists
	I0816 22:25:40.829145   11635 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0" took 10.773666591s
	I0816 22:25:40.829193   11635 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 succeeded
	I0816 22:25:41.800993   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 exists
	I0816 22:25:41.801047   11635 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0" took 11.745601732s
	I0816 22:25:41.801066   11635 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 succeeded
	I0816 22:25:43.868728   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 exists
	I0816 22:25:43.868783   11635 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0" took 13.813268232s
	I0816 22:25:43.868801   11635 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 succeeded
	I0816 22:25:44.515937   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:45.016224   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:45.516460   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:46.016178   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:46.516524   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:47.015786   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:47.516658   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:48.015708   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:48.515976   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:49.016470   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
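Interleaved with the upgrade test, process 10879 (the second start of the paused cluster) polls pgrep roughly every 500ms, waiting for a kube-apiserver process to appear. The shape is a bounded fixed-interval poll; a sketch of it, run locally for simplicity where the real test goes through ssh_runner:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a matching process appears or the
// deadline passes. pgrep exits non-zero when nothing matches, so a nil
// error from Run means the apiserver process exists.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}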
	I0816 22:25:50.674039   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | SSH cmd err, output: <nil>: 
	I0816 22:25:50.674356   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetConfigRaw
	I0816 22:25:50.674991   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetIP
	I0816 22:25:50.680301   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.680676   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:50.680700   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.680937   11635 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/stopped-upgrade-20210816222405-6986/config.json ...
	I0816 22:25:50.681113   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:50.681286   11635 machine.go:88] provisioning docker machine ...
	I0816 22:25:50.681307   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:50.681460   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetMachineName
	I0816 22:25:50.681591   11635 buildroot.go:166] provisioning hostname "stopped-upgrade-20210816222405-6986"
	I0816 22:25:50.681630   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetMachineName
	I0816 22:25:50.681772   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:50.686529   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.686855   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:50.686891   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.687006   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHPort
	I0816 22:25:50.687142   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:50.687255   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:50.687342   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHUsername
	I0816 22:25:50.687442   11635 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:50.687632   11635 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.94.139 22 <nil> <nil>}
	I0816 22:25:50.687655   11635 main.go:130] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-20210816222405-6986 && echo "stopped-upgrade-20210816222405-6986" | sudo tee /etc/hostname
	I0816 22:25:50.813329   11635 main.go:130] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-20210816222405-6986
	
	I0816 22:25:50.813359   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:50.818466   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.818799   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:50.818835   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.818990   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHPort
	I0816 22:25:50.819183   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:50.819328   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:50.819469   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHUsername
	I0816 22:25:50.819617   11635 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:50.819779   11635 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.94.139 22 <nil> <nil>}
	I0816 22:25:50.819808   11635 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-20210816222405-6986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-20210816222405-6986/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-20210816222405-6986' | sudo tee -a /etc/hosts; 
				fi
			fi
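Hostname provisioning is two remote commands: set the kernel hostname and persist it to /etc/hostname, then make /etc/hosts self-consistent by rewriting an existing 127.0.1.1 entry or appending one. A sketch that only composes those commands for a given name (the helper is invented for illustration; minikube runs the results over SSH):

package main

import "fmt"

// hostnameCommands returns the two shell commands the provisioner runs,
// in order: set-and-persist the hostname, then fix up /etc/hosts.
func hostnameCommands(name string) []string {
	set := fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname`, name)
	hosts := fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
	return []string{set, hosts}
}

func main() {
	for _, c := range hostnameCommands("stopped-upgrade-20210816222405-6986") {
		fmt.Println(c)
	}
}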
	I0816 22:25:50.843338   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 exists
	I0816 22:25:50.843375   11635 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0" took 20.787884619s
	I0816 22:25:50.843386   11635 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 succeeded
	I0816 22:25:50.843403   11635 cache.go:88] Successfully saved all images to host disk.
	I0816 22:25:50.936578   11635 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:25:50.936617   11635 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:25:50.936644   11635 buildroot.go:174] setting up certificates
	I0816 22:25:50.936656   11635 provision.go:83] configureAuth start
	I0816 22:25:50.936668   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetMachineName
	I0816 22:25:50.936926   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetIP
	I0816 22:25:50.941824   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.942162   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:50.942193   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.942301   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:50.946653   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.946938   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:50.946977   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.947097   11635 provision.go:138] copyHostCerts
	I0816 22:25:50.947147   11635 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:25:50.947156   11635 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:25:50.947203   11635 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:25:50.947271   11635 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:25:50.947280   11635 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:25:50.947296   11635 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
	I0816 22:25:50.947338   11635 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:25:50.947349   11635 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:25:50.947365   11635 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:25:50.947403   11635 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-20210816222405-6986 san=[192.168.94.139 192.168.94.139 localhost 127.0.0.1 minikube stopped-upgrade-20210816222405-6986]
	I0816 22:25:51.032615   11635 provision.go:172] copyRemoteCerts
	I0816 22:25:51.032661   11635 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:25:51.032682   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:51.037604   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.037874   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:51.037905   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.038009   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHPort
	I0816 22:25:51.038148   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:51.038246   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHUsername
	I0816 22:25:51.038339   11635 sshutil.go:53] new ssh client: &{IP:192.168.94.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816222405-6986/id_rsa Username:docker}
	I0816 22:25:51.123210   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:25:51.139587   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0816 22:25:51.155341   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:25:51.171555   11635 provision.go:86] duration metric: configureAuth took 234.887855ms
	I0816 22:25:51.171579   11635 buildroot.go:189] setting minikube options for container-runtime
	I0816 22:25:51.171722   11635 config.go:177] Loaded profile config "stopped-upgrade-20210816222405-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0816 22:25:51.171746   11635 machine.go:91] provisioned docker machine in 490.44902ms
	I0816 22:25:51.171754   11635 start.go:267] post-start starting for "stopped-upgrade-20210816222405-6986" (driver="kvm2")
	I0816 22:25:51.171760   11635 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:25:51.171782   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:51.172055   11635 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:25:51.172077   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:51.176933   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.177267   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:51.177298   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.177441   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHPort
	I0816 22:25:51.177602   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:51.177724   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHUsername
	I0816 22:25:51.177820   11635 sshutil.go:53] new ssh client: &{IP:192.168.94.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816222405-6986/id_rsa Username:docker}
	I0816 22:25:51.280843   11635 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:25:51.285664   11635 info.go:137] Remote host: Buildroot 2020.02.8
	I0816 22:25:51.285688   11635 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:25:51.285748   11635 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:25:51.285858   11635 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0816 22:25:51.285968   11635 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:25:51.293397   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:25:51.309338   11635 start.go:270] post-start completed in 137.570449ms
	I0816 22:25:51.309377   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:51.309623   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:51.315839   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.316232   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:51.316258   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.316441   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHPort
	I0816 22:25:51.316643   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:51.316819   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:51.316941   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHUsername
	I0816 22:25:51.317135   11635 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:51.317300   11635 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.94.139 22 <nil> <nil>}
	I0816 22:25:51.317315   11635 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0816 22:25:51.429012   11635 main.go:130] libmachine: SSH cmd err, output: <nil>: 1629152751.360423899
	
	I0816 22:25:51.429036   11635 fix.go:212] guest clock: 1629152751.360423899
	I0816 22:25:51.429046   11635 fix.go:225] Guest: 2021-08-16 22:25:51.360423899 +0000 UTC Remote: 2021-08-16 22:25:51.30960232 +0000 UTC m=+21.494401500 (delta=50.821579ms)
	I0816 22:25:51.429069   11635 fix.go:196] guest clock delta is within tolerance: 50.821579ms
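The fix.go lines compare the guest's date +%s.%N output against the host clock and skip a resync when the drift is small; here the delta is about 51ms. A sketch of that comparison (the one-second tolerance below is an assumption, not a value taken from minikube):

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether the guest clock is close enough to the
// host clock that no resync is needed.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1629152751, 360423899)
	host := guest.Add(-50821579 * time.Nanosecond) // the delta seen in the log
	d, ok := clockDeltaOK(guest, host, time.Second)
	fmt.Printf("delta=%s within tolerance: %v\n", d, ok)
}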
	I0816 22:25:51.429077   11635 fix.go:57] fixHost completed within 20.484163709s
	I0816 22:25:51.429086   11635 start.go:80] releasing machines lock for "stopped-upgrade-20210816222405-6986", held for 20.484204502s
	I0816 22:25:51.429135   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:51.429381   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetIP
	I0816 22:25:51.434740   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.435145   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:51.435177   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.435313   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:51.435469   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:51.435909   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:51.436161   11635 ssh_runner.go:149] Run: systemctl --version
	I0816 22:25:51.436188   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:51.436224   11635 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:25:51.436266   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:51.441919   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.442250   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:51.442273   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.442407   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHPort
	I0816 22:25:51.442580   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:51.442707   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHUsername
	I0816 22:25:51.442844   11635 sshutil.go:53] new ssh client: &{IP:192.168.94.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816222405-6986/id_rsa Username:docker}
	I0816 22:25:51.442940   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.443308   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:51.443347   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.443442   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHPort
	I0816 22:25:51.443577   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:51.443703   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHUsername
	I0816 22:25:51.443818   11635 sshutil.go:53] new ssh client: &{IP:192.168.94.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816222405-6986/id_rsa Username:docker}
	I0816 22:25:51.580011   11635 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0816 22:25:51.580081   11635 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0816 22:25:51.617346   11635 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0816 22:25:51.629075   11635 docker.go:153] disabling docker service ...
	I0816 22:25:51.629131   11635 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:25:51.640426   11635 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:25:51.649102   11635 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:25:51.808471   11635 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:25:51.947554   11635 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:25:51.958967   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:25:51.978504   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuMiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CgoJW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiXQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lc10KICAgICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmMub3B0aW9uc10KICAgICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZF0KICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgIFtwbHVnaW5zLmNyaS5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQuZCIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
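The containerd config is shipped through base64 so that an arbitrary TOML body survives shell quoting: encode locally, then pipe through base64 -d into sudo tee on the guest. A sketch of composing such a command (illustrative, not minikube's exact template):

package main

import (
	"encoding/base64"
	"fmt"
)

// remoteWriteCommand builds a shell command that recreates the file at
// path on the guest from a locally base64-encoded payload, avoiding any
// quoting issues in the file contents.
func remoteWriteCommand(path, contents string) string {
	enc := base64.StdEncoding.EncodeToString([]byte(contents))
	return fmt.Sprintf(`sudo mkdir -p /etc/containerd && printf %%s "%s" | base64 -d | sudo tee %s`, enc, path)
}

func main() {
	fmt.Println(remoteWriteCommand("/etc/containerd/config.toml", "oom_score = 0\n"))
}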
	I0816 22:25:51.992652   11635 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:25:51.998569   11635 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:25:51.998615   11635 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:25:52.014222   11635 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
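The status-255 sysctl above is the expected miss path: /proc/sys/net/bridge/ only exists once br_netfilter is loaded, so the fallback is to modprobe the module and continue, then enable IPv4 forwarding. A sketch of that probe-then-fallback, assuming passwordless sudo on the target:

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter tries the sysctl first; if the key is missing
// (module not loaded), it falls back to modprobe, as the log shows.
func ensureBridgeNetfilter() error {
	if exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run() == nil {
		return nil
	}
	if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
		return fmt.Errorf("br_netfilter unavailable: %w", err)
	}
	return nil
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println(err)
	}
}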
	I0816 22:25:52.020197   11635 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:25:52.145893   11635 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:25:52.190011   11635 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0816 22:25:52.190112   11635 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:25:52.195055   11635 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
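After restarting containerd there is a window before the socket exists, so the stat is retried (retry.go backs off for about a second here) within a 60s budget. A simplified wait loop with a fixed sleep rather than minikube's randomized backoff:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket stats the path until it exists or the budget runs out,
// sleeping between attempts much like retry.go does in the log above.
func waitForSocket(path string, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}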
	I0816 22:25:53.300309   11635 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:25:53.306277   11635 start.go:413] Will wait 60s for crictl version
	I0816 22:25:53.306323   11635 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:25:53.324774   11635 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.3
	RuntimeApiVersion:  v1alpha2
	I0816 22:25:53.324831   11635 ssh_runner.go:149] Run: containerd --version
	I0816 22:25:53.361138   11635 ssh_runner.go:149] Run: containerd --version
	I0816 22:25:49.515642   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:50.015846   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:50.516027   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:51.015729   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:51.516379   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:52.016301   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:52.516625   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:53.015666   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:53.516566   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:54.016518   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:53.402219   11635 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.4.3 ...
	I0816 22:25:53.402261   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetIP
	I0816 22:25:53.407765   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:53.408040   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:53.408078   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:53.408229   11635 ssh_runner.go:149] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0816 22:25:53.412479   11635 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:25:53.424358   11635 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0816 22:25:53.424401   11635 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:53.448218   11635 containerd.go:609] couldn't find preloaded image for "gcr.io/k8s-minikube/storage-provisioner:v5". assuming images are not preloaded.
	I0816 22:25:53.448242   11635 cache_images.go:78] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.20.0 k8s.gcr.io/kube-controller-manager:v1.20.0 k8s.gcr.io/kube-scheduler:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0 k8s.gcr.io/pause:3.2 k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0816 22:25:53.448286   11635 image.go:133] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0816 22:25:53.448315   11635 image.go:133] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:25:53.448336   11635 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.20.0
	I0816 22:25:53.448339   11635 image.go:133] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0816 22:25:53.448315   11635 image.go:133] retrieving image: k8s.gcr.io/pause:3.2
	I0816 22:25:53.448407   11635 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.13-0
	I0816 22:25:53.448419   11635 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.20.0
	I0816 22:25:53.448384   11635 image.go:133] retrieving image: k8s.gcr.io/coredns:1.7.0
	I0816 22:25:53.448483   11635 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.20.0
	I0816 22:25:53.448495   11635 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.20.0
	I0816 22:25:53.461290   11635 image.go:171] found k8s.gcr.io/pause:3.2 locally: &{UncompressedImageCore:0xc000010aa0 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:53.461345   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.2 | grep 80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
	I0816 22:25:53.970932   11635 cache_images.go:106] "k8s.gcr.io/pause:3.2" needs transfer: "k8s.gcr.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 22:25:53.970997   11635 cri.go:205] Removing image: k8s.gcr.io/pause:3.2
	I0816 22:25:53.971051   11635 ssh_runner.go:149] Run: which crictl
	I0816 22:25:53.982673   11635 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/pause:3.2
	I0816 22:25:53.998453   11635 image.go:171] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{UncompressedImageCore:0xc000010ac8 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:53.998519   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5 | grep 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0816 22:25:54.075847   11635 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2
	I0816 22:25:54.075942   11635 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.2
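Image loading follows a check-then-repair loop: verify the ref and digest inside containerd's k8s.io namespace via ctr images check; on a miss (as with pause:3.2 above) remove the stale ref with crictl and reload the image from the on-disk cache tarball. A sketch that just composes the verification pipeline; the command strings mirror the log, while the helper itself is invented:

package main

import "fmt"

// digestCheckCommand builds the pipeline seen in the log: list image
// records in the k8s.io namespace and require both the ref and its
// expected sha256 to appear.
func digestCheckCommand(ref, digest string) string {
	return fmt.Sprintf("sudo ctr -n=k8s.io images check | grep %s | grep %s", ref, digest)
}

func main() {
	fmt.Println(digestCheckCommand("k8s.gcr.io/pause:3.2",
		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"))
}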
	I0816 22:25:54.205435   11635 image.go:171] found index.docker.io/kubernetesui/metrics-scraper:v1.0.4 locally: &{UncompressedImageCore:0xc0001141d0 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:54.205506   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/metrics-scraper:v1.0.4 | grep 86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4"
	I0816 22:25:54.279275   11635 image.go:171] found k8s.gcr.io/coredns:1.7.0 locally: &{UncompressedImageCore:0xc000114020 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:54.279354   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns:1.7.0 | grep bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	f04c445038901       6e38f40d628db       45 seconds ago       Exited              storage-provisioner       0                   0e10d9862204b
	e70dd80568a0a       296a6d5035e2d       56 seconds ago       Running             coredns                   1                   c649190b7c07d
	2585772c8a261       adb2816ea823a       57 seconds ago       Running             kube-proxy                2                   d73b4cafe25f0
	53780b2759956       3d174f00aa39e       About a minute ago   Running             kube-apiserver            2                   fb9f201b2c2e1
	76fef890edebe       6be0dc1302e30       About a minute ago   Running             kube-scheduler            2                   1718d2a0276ce
	69a7fab4848c4       0369cf4303ffd       About a minute ago   Running             etcd                      2                   3b9459ff3a0d8
	825e79d62718c       bc2bb319a7038       About a minute ago   Running             kube-controller-manager   2                   feab707eb735a
	7626b842ef886       3d174f00aa39e       About a minute ago   Created             kube-apiserver            1                   fb9f201b2c2e1
	9d9f34b35e099       adb2816ea823a       About a minute ago   Created             kube-proxy                1                   d73b4cafe25f0
	97c4cc3614116       6be0dc1302e30       About a minute ago   Created             kube-scheduler            1                   1718d2a0276ce
	3644e35e40a2f       0369cf4303ffd       About a minute ago   Created             etcd                      1                   3b9459ff3a0d8
	8c5f2c007cff4       bc2bb319a7038       About a minute ago   Created             kube-controller-manager   1                   feab707eb735a
	28c7161cd49a4       296a6d5035e2d       2 minutes ago        Exited              coredns                   0                   05c2427240818
	a8503bd796d5d       adb2816ea823a       2 minutes ago        Exited              kube-proxy                0                   a86c3b6ee3a70
	124fa393359f7       0369cf4303ffd       2 minutes ago        Exited              etcd                      0                   94a493a65b593
	8710cefecdbe5       6be0dc1302e30       2 minutes ago        Exited              kube-scheduler            0                   982e66890a90d
	38dc61b214a9c       3d174f00aa39e       2 minutes ago        Exited              kube-apiserver            0                   630ed9d4644e9
	
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2021-08-16 22:22:35 UTC, end at Mon 2021-08-16 22:25:58 UTC. --
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.233198734Z" level=info msg="StartContainer for \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.443953465Z" level=info msg="StopPodSandbox for \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.444769081Z" level=info msg="Container to stop \"28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.461877463Z" level=info msg="StartContainer for \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\" returns successfully"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536079353Z" level=info msg="TearDown network for sandbox \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\" successfully"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536191167Z" level=info msg="StopPodSandbox for \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\" returns successfully"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536962082Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-gkxhz,Uid:5aa76749-775e-423d-bbf9-680a20a27051,Namespace:kube-system,Attempt:1,}"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.776744568Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb pid=5007
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.290447333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-gkxhz,Uid:5aa76749-775e-423d-bbf9-680a20a27051,Namespace:kube-system,Attempt:1,} returns sandbox id \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\""
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.300600113Z" level=info msg="CreateContainer within sandbox \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.389478760Z" level=info msg="CreateContainer within sandbox \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\""
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.397162604Z" level=info msg="StartContainer for \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\""
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.594046909Z" level=info msg="StartContainer for \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\" returns successfully"
	Aug 16 22:25:11 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:11.852957632Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:4f138dc7-da0e-4775-b4de-b0f7d616b212,Namespace:kube-system,Attempt:0,}"
	Aug 16 22:25:11 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:11.903771908Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c pid=5174
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.439549893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:4f138dc7-da0e-4775-b4de-b0f7d616b212,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\""
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.451930506Z" level=info msg="CreateContainer within sandbox \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.521875733Z" level=info msg="CreateContainer within sandbox \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.523292924Z" level=info msg="StartContainer for \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.851898064Z" level=info msg="StartContainer for \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\" returns successfully"
	Aug 16 22:25:32 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:32.698376142Z" level=info msg="Finish piping stderr of container \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
	Aug 16 22:25:32 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:32.700549219Z" level=info msg="Finish piping stdout of container \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
	Aug 16 22:25:32 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:32.702928257Z" level=info msg="TaskExit event &TaskExit{ContainerID:f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a,ID:f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a,Pid:5251,ExitStatus:255,ExitedAt:2021-08-16 22:25:32.702245647 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:25:32 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:32.834950240Z" level=info msg="shim disconnected" id=f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a
	Aug 16 22:25:32 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:32.835568670Z" level=error msg="copy shim log" error="read /proc/self/fd/118: file already closed"
	
	* 
	* ==> coredns [28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	I0816 22:24:19.170128       1 trace.go:205] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.168) (total time: 30001ms):
	Trace[2019727887]: [30.001909435s] [30.001909435s] END
	E0816 22:24:19.170279       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0816 22:24:19.171047       1 trace.go:205] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.166) (total time: 30004ms):
	Trace[939984059]: [30.004733433s] [30.004733433s] END
	E0816 22:24:19.171149       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0816 22:24:19.171258       1 trace.go:205] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.166) (total time: 30004ms):
	Trace[911902081]: [30.004945736s] [30.004945736s] END
	E0816 22:24:19.171265       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> coredns [e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7ae91e86dd75dee9ae501cb58003198b
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.985023] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1731 comm=systemd-network
	[  +1.088197] vboxguest: loading out-of-tree module taints kernel.
	[  +0.006251] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.889854] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[ +16.286436] systemd-fstab-generator[2098]: Ignoring "noauto" for root device
	[  +0.258185] systemd-fstab-generator[2128]: Ignoring "noauto" for root device
	[  +0.135377] systemd-fstab-generator[2143]: Ignoring "noauto" for root device
	[  +0.180446] systemd-fstab-generator[2173]: Ignoring "noauto" for root device
	[Aug16 22:23] systemd-fstab-generator[2381]: Ignoring "noauto" for root device
	[ +20.504547] systemd-fstab-generator[2808]: Ignoring "noauto" for root device
	[ +20.717915] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.551219] kauditd_printk_skb: 104 callbacks suppressed
	[Aug16 22:24] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.792051] systemd-fstab-generator[3754]: Ignoring "noauto" for root device
	[  +0.176916] systemd-fstab-generator[3767]: Ignoring "noauto" for root device
	[  +0.230657] systemd-fstab-generator[3792]: Ignoring "noauto" for root device
	[  +4.083098] kauditd_printk_skb: 2 callbacks suppressed
	[  +3.840195] NFSD: Unable to end grace period: -110
	[  +4.324119] systemd-fstab-generator[4543]: Ignoring "noauto" for root device
	[  +6.680726] kauditd_printk_skb: 29 callbacks suppressed
	[Aug16 22:25] kauditd_printk_skb: 14 callbacks suppressed
	[ +12.641213] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.313576] systemd-fstab-generator[5666]: Ignoring "noauto" for root device
	[  +0.847782] systemd-fstab-generator[5723]: Ignoring "noauto" for root device
	[  +1.051927] systemd-fstab-generator[5775]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [124fa393359f758ea47161b345d2cab4ce486d4473a4caad483449464d44315f] <==
	* 2021-08-16 22:23:41.064197 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210816222224-6986\" " with result "range_response_count:1 size:5052" took too long (6.421187445s) to execute
	2021-08-16 22:23:41.065847 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:341" took too long (6.446897155s) to execute
	2021-08-16 22:23:41.066285 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/node-controller\" " with result "range_response_count:1 size:242" took too long (5.09674902s) to execute
	2021-08-16 22:23:41.068005 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (6.28196539s) to execute
	2021-08-16 22:23:41.068259 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (763.710719ms) to execute
	2021-08-16 22:23:41.880435 W | etcdserver: read-only range request "key:\"/registry/masterleases/192.168.50.226\" " with result "range_response_count:0 size:5" took too long (776.335267ms) to execute
	2021-08-16 22:23:41.881080 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (597.366064ms) to execute
	2021-08-16 22:23:41.882354 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:4569" took too long (763.841142ms) to execute
	2021-08-16 22:23:41.883287 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (621.677263ms) to execute
	2021-08-16 22:23:41.884722 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (481.499599ms) to execute
	2021-08-16 22:23:41.885189 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-pause-20210816222224-6986\" " with result "range_response_count:1 size:5421" took too long (772.180278ms) to execute
	2021-08-16 22:23:42.453217 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (290.061418ms) to execute
	2021-08-16 22:23:42.455427 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (285.893643ms) to execute
	2021-08-16 22:23:42.456943 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-pause-20210816222224-6986\" " with result "range_response_count:1 size:6314" took too long (153.946258ms) to execute
	2021-08-16 22:23:42.458024 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (177.825431ms) to execute
	2021-08-16 22:23:44.267832 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:23:54.092150 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (701.802797ms) to execute
	2021-08-16 22:23:54.093518 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-ctgf5\" " with result "range_response_count:1 size:4473" took too long (1.090386256s) to execute
	2021-08-16 22:23:54.267392 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:23:57.768234 W | etcdserver: request "header:<ID:4263355585347158035 > lease_revoke:<id:3b2a7b510fcb7e67>" with result "size:29" took too long (771.90226ms) to execute
	2021-08-16 22:23:57.768903 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (374.444829ms) to execute
	2021-08-16 22:23:57.769379 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-ctgf5\" " with result "range_response_count:1 size:4473" took too long (765.115046ms) to execute
	2021-08-16 22:24:04.267548 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:24:14.267958 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:24:24.268321 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> etcd [3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423] <==
	* 
	* ==> etcd [69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549] <==
	* 2021-08-16 22:24:53.773065 W | auth: simple token is not cryptographically signed
	2021-08-16 22:24:53.837118 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	raft2021/08/16 22:24:53 INFO: e840193bf29c3b2a switched to configuration voters=(16735403960572853034)
	2021-08-16 22:24:53.849298 I | etcdserver/membership: added member e840193bf29c3b2a [https://192.168.50.226:2380] to cluster 99b90e1bea73c730
	2021-08-16 22:24:53.860198 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-16 22:24:53.864997 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-16 22:24:53.865214 I | embed: listening for peers on 192.168.50.226:2380
	2021-08-16 22:24:53.868083 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-16 22:24:53.871735 I | etcdserver/api: enabled capabilities for version 3.4
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a is starting a new election at term 2
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a became candidate at term 3
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a received MsgVoteResp from e840193bf29c3b2a at term 3
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a became leader at term 3
	raft2021/08/16 22:24:54 INFO: raft.node: e840193bf29c3b2a elected leader e840193bf29c3b2a at term 3
	2021-08-16 22:24:54.968820 I | embed: ready to serve client requests
	2021-08-16 22:24:54.969394 I | etcdserver: published {Name:pause-20210816222224-6986 ClientURLs:[https://192.168.50.226:2379]} to cluster 99b90e1bea73c730
	2021-08-16 22:24:54.971284 I | embed: serving client requests on 192.168.50.226:2379
	2021-08-16 22:24:54.971462 I | embed: ready to serve client requests
	2021-08-16 22:24:54.973508 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-16 22:25:03.067902 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-gkxhz\" " with result "range_response_count:1 size:4860" took too long (140.807991ms) to execute
	2021-08-16 22:25:06.747736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:25:08.138740 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:25:10.645123 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:3838" took too long (108.124514ms) to execute
	2021-08-16 22:25:10.645989 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:665" took too long (107.967343ms) to execute
	2021-08-16 22:25:18.137756 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  22:26:08 up 3 min,  0 users,  load average: 1.64, 1.38, 0.59
	Linux pause-20210816222224-6986 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [38dc61b214a9cbd019de4ca9ab52fb6baf728336de6d715df22b027522ad8b20] <==
	* I0816 22:23:41.890272       1 trace.go:205] Trace[914939944]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:23:41.117) (total time: 772ms):
	Trace[914939944]: [772.29448ms] [772.29448ms] END
	I0816 22:23:41.897880       1 trace.go:205] Trace[372773048]: "List" url:/api/v1/nodes,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/system:serviceaccount:kube-system:node-controller,client:192.168.50.226,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:41.117) (total time: 780ms):
	Trace[372773048]: ---"Listing from storage done" 773ms (22:23:00.891)
	Trace[372773048]: [780.024685ms] [780.024685ms] END
	I0816 22:23:41.899245       1 trace.go:205] Trace[189474875]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-20210816222224-6986,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.50.226,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (16-Aug-2021 22:23:41.107) (total time: 791ms):
	Trace[189474875]: ---"About to write a response" 791ms (22:23:00.899)
	Trace[189474875]: [791.769473ms] [791.769473ms] END
	I0816 22:23:41.914143       1 trace.go:205] Trace[1803257945]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (16-Aug-2021 22:23:41.101) (total time: 812ms):
	Trace[1803257945]: ---"initial value restored" 795ms (22:23:00.897)
	Trace[1803257945]: [812.099383ms] [812.099383ms] END
	I0816 22:23:46.219827       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0816 22:23:46.322056       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0816 22:23:54.101003       1 trace.go:205] Trace[1429856954]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-ctgf5,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.50.1,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:53.002) (total time: 1098ms):
	Trace[1429856954]: ---"About to write a response" 1098ms (22:23:00.100)
	Trace[1429856954]: [1.0988209s] [1.0988209s] END
	I0816 22:23:56.194218       1 client.go:360] parsed scheme: "passthrough"
	I0816 22:23:56.194943       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 22:23:56.195388       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0816 22:23:57.770900       1 trace.go:205] Trace[2103117378]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-ctgf5,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.50.1,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:57.002) (total time: 767ms):
	Trace[2103117378]: ---"About to write a response" 767ms (22:23:00.770)
	Trace[2103117378]: [767.944134ms] [767.944134ms] END
	I0816 22:24:32.818404       1 client.go:360] parsed scheme: "passthrough"
	I0816 22:24:32.818597       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 22:24:32.818691       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-apiserver [53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf] <==
	* I0816 22:24:59.052878       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0816 22:24:59.052897       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0816 22:24:59.071128       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	I0816 22:24:59.071704       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
	I0816 22:24:59.072328       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0816 22:24:59.072872       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
	I0816 22:24:59.173327       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0816 22:24:59.176720       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	E0816 22:24:59.181278       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0816 22:24:59.206356       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0816 22:24:59.225165       1 cache.go:39] Caches are synced for autoregister controller
	I0816 22:24:59.227741       1 apf_controller.go:299] Running API Priority and Fairness config worker
	I0816 22:24:59.230223       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0816 22:24:59.244026       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 22:24:59.248943       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0816 22:25:00.021310       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0816 22:25:00.022052       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0816 22:25:00.034218       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0816 22:25:01.108795       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0816 22:25:01.182177       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0816 22:25:01.279321       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0816 22:25:01.344553       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 22:25:01.382891       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 22:25:11.471022       1 controller.go:611] quota admission added evaluator for: endpoints
	I0816 22:25:13.002505       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612] <==
	* 
	* ==> kube-controller-manager [825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7] <==
	* W0816 22:26:05.643117       1 reflector.go:436] k8s.io/client-go/metadata/metadatainformer/informer.go:90: watch of *v1.PartialObjectMetadata ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643146       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.LimitRange ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643155       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Namespace ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643179       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.StatefulSet ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643208       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolume ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643234       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ResourceQuota ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643270       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643263       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ControllerRevision ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643305       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643314       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolumeClaim ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643340       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643359       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PriorityClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643375       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.Ingress ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643397       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643403       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PodDisruptionBudget ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643441       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.RoleBinding ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643454       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.PodSecurityPolicy ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643483       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643499       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.NetworkPolicy ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643529       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ValidatingWebhookConfiguration ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643538       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.CSIStorageCapacity ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643571       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Endpoints ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643579       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ReplicationController ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643697       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ClusterRole ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643952       1 reflector.go:436] k8s.io/client-go/metadata/metadatainformer/informer.go:90: watch of *v1.PartialObjectMetadata ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	* 
	* ==> kube-controller-manager [8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4] <==
	* 
	* ==> kube-proxy [2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2] <==
	* I0816 22:25:00.641886       1 node.go:172] Successfully retrieved node IP: 192.168.50.226
	I0816 22:25:00.641938       1 server_others.go:140] Detected node IP 192.168.50.226
	W0816 22:25:00.642012       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0816 22:25:00.805515       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0816 22:25:00.805539       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0816 22:25:00.805560       1 server_others.go:212] Using iptables Proxier.
	I0816 22:25:00.806059       1 server.go:643] Version: v1.21.3
	I0816 22:25:00.807251       1 config.go:315] Starting service config controller
	I0816 22:25:00.807281       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:25:00.807307       1 config.go:224] Starting endpoint slice config controller
	I0816 22:25:00.807313       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:25:00.812511       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 22:25:00.816722       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:25:00.907844       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 22:25:00.907906       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d] <==
	* 
	* ==> kube-proxy [a8503bd796d5d979a6e1b8b5154986e8b77de391b4f091211451ea5f52808e52] <==
	* I0816 22:23:49.316430       1 node.go:172] Successfully retrieved node IP: 192.168.50.226
	I0816 22:23:49.316608       1 server_others.go:140] Detected node IP 192.168.50.226
	W0816 22:23:49.316822       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0816 22:23:49.402698       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0816 22:23:49.403462       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0816 22:23:49.404047       1 server_others.go:212] Using iptables Proxier.
	I0816 22:23:49.407950       1 server.go:643] Version: v1.21.3
	I0816 22:23:49.410864       1 config.go:315] Starting service config controller
	I0816 22:23:49.413112       1 config.go:224] Starting endpoint slice config controller
	I0816 22:23:49.419474       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:23:49.421254       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:23:49.413718       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:23:49.425958       1 shared_informer.go:247] Caches are synced for service config 
	W0816 22:23:49.425586       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:23:49.520425       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1] <==
	* I0816 22:24:59.166481       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0816 22:24:59.178395       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:24:59.177851       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0816 22:24:59.194249       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:24:59.304036       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	W0816 22:25:44.233423       1 reflector.go:436] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	I0816 22:25:55.632239       1 trace.go:205] Trace[189707582]: "Reflector ListAndWatch" name:k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 (16-Aug-2021 22:25:45.628) (total time: 10003ms):
	Trace[189707582]: [10.003912236s] [10.003912236s] END
	E0816 22:25:55.632411       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.50.226:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&resourceVersion=484": net/http: TLS handshake timeout
	W0816 22:25:58.190260       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:25:58.190470       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PodDisruptionBudget ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:25:58.190568       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:25:58.190952       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ReplicaSet ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:25:58.191125       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:25:58.191321       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.StatefulSet ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:25:58.191475       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.StorageClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:25:58.191547       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.CSIStorageCapacity ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:25:58.191835       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolume ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:25:58.192015       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ReplicationController ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:25:58.192218       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.CSINode ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:25:58.192267       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolumeClaim ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:25:58.192414       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	I0816 22:26:08.727084       1 trace.go:205] Trace[1389195565]: "Reflector ListAndWatch" name:k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206 (16-Aug-2021 22:25:58.723) (total time: 10003ms):
	Trace[1389195565]: [10.003422535s] [10.003422535s] END
	E0816 22:26:08.727119       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.50.226:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&resourceVersion=484": net/http: TLS handshake timeout
	
	* 
	* ==> kube-scheduler [8710cefecdbe5d31cd44e9ae3378bc08cbc56001326a1cb38026755196cac7d1] <==
	* E0816 22:23:21.172468       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:23:21.189536       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:23:21.300836       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.329219       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.448607       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:23:21.504104       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.504531       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:23:21.597849       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:23:21.612843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:23:21.671333       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:23:21.827198       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:23:21.852843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:23:21.867015       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.910139       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:23:23.291774       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.356078       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.452841       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:23:23.464942       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:23:23.644764       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:23:23.649142       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.710606       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.980099       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:23:24.052112       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:23:24.168543       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0816 22:23:30.043826       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a] <==
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:22:35 UTC, end at Mon 2021-08-16 22:26:09 UTC. --
	Aug 16 22:24:58 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:58.919357    4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: E0816 22:24:59.020392    4551 kubelet.go:2291] "Error getting node" err="node \"pause-20210816222224-6986\" not found"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.121233    4551 kuberuntime_manager.go:1044] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.122462    4551 kubelet_network.go:76] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.228577    4551 kubelet_node_status.go:109] "Node was previously registered" node="pause-20210816222224-6986"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.228853    4551 kubelet_node_status.go:74] "Successfully registered node" node="pause-20210816222224-6986"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.536346    4551 apiserver.go:52] "Watching apiserver"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.540959    4551 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.541581    4551 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.609734    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3c0e0899-31c1-477a-a6d4-2844091deea2-kube-proxy\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.610130    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c0e0899-31c1-477a-a6d4-2844091deea2-xtables-lock\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.610271    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c0e0899-31c1-477a-a6d4-2844091deea2-lib-modules\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.610503    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2grh\" (UniqueName: \"kubernetes.io/projected/3c0e0899-31c1-477a-a6d4-2844091deea2-kube-api-access-b2grh\") pod \"kube-proxy-7l59t\" (UID: \"3c0e0899-31c1-477a-a6d4-2844091deea2\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.711424    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgpd2\" (UniqueName: \"kubernetes.io/projected/5aa76749-775e-423d-bbf9-680a20a27051-kube-api-access-rgpd2\") pod \"coredns-558bd4d5db-gkxhz\" (UID: \"5aa76749-775e-423d-bbf9-680a20a27051\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.712578    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5aa76749-775e-423d-bbf9-680a20a27051-config-volume\") pod \"coredns-558bd4d5db-gkxhz\" (UID: \"5aa76749-775e-423d-bbf9-680a20a27051\") "
	Aug 16 22:24:59 pause-20210816222224-6986 kubelet[4551]: I0816 22:24:59.713123    4551 reconciler.go:157] "Reconciler: start to sync state"
	Aug 16 22:25:00 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:00.142816    4551 scope.go:111] "RemoveContainer" containerID="9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d"
	Aug 16 22:25:03 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:03.115940    4551 prober_manager.go:255] "Failed to trigger a manual run" probe="Readiness"
	Aug 16 22:25:11 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:11.548694    4551 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:25:11 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:11.620746    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4f138dc7-da0e-4775-b4de-b0f7d616b212-tmp\") pod \"storage-provisioner\" (UID: \"4f138dc7-da0e-4775-b4de-b0f7d616b212\") "
	Aug 16 22:25:11 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:11.621027    4551 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7pzn\" (UniqueName: \"kubernetes.io/projected/4f138dc7-da0e-4775-b4de-b0f7d616b212-kube-api-access-n7pzn\") pod \"storage-provisioner\" (UID: \"4f138dc7-da0e-4775-b4de-b0f7d616b212\") "
	Aug 16 22:25:19 pause-20210816222224-6986 kubelet[4551]: I0816 22:25:19.625547    4551 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 16 22:25:19 pause-20210816222224-6986 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:25:19 pause-20210816222224-6986 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:25:19 pause-20210816222224-6986 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 80 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc00032b490, 0x0)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc00032b480)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0003e1260, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0003bd400, 0x18e5530, 0xc00032a600, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0001b7d20)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0001b7d20, 0x18b3d60, 0xc0001bb8c0, 0xc00038ff01, 0xc000136600)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0001b7d20, 0x3b9aca00, 0x0, 0x48ef01, 0xc000136600)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0001b7d20, 0x3b9aca00, 0xc000136600)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

-- /stdout --
** stderr ** 
	E0816 22:26:08.349607   11874 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	E0816 22:26:08.569481   11874 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:26:08Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log\": lstat /var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:26:08Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log\\\": lstat /var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:26:08.775754   11874 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:26:08Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log\": lstat /var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:26:08Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log\\\": lstat /var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:26:08.863604   11874 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:26:08Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log\": lstat /var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:26:08Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log\\\": lstat /var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:26:08.945562   11874 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:26:08Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log\": lstat /var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:26:08Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log\\\": lstat /var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:26:09.125057   11874 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:26:09Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log\": lstat /var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:26:09Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log\\\": lstat /var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log: no such file or directory\"\n\n** /stderr **"
	! unable to fetch logs for: describe nodes, etcd [3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423], kube-apiserver [7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612], kube-controller-manager [8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4], kube-proxy [9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d], kube-scheduler [97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a]

** /stderr **
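A note on the crictl errors captured above: each `crictl logs --tail 25 <container-id>` call fatals because the rotated log file it tries to resolve (`.../1.log`) no longer exists — the containers were recreated across the kubelet restart, so the old log symlinks dangle. A minimal sketch of a guard against this, assuming `crictl inspect` reports the log location as `status.logPath`; the helper below is illustrative, not minikube's actual log collector:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	// tailContainerLog resolves the container's log path first and only tails
	// it if the file still exists, avoiding the "failed to try resolving
	// symlinks" fatal seen in the stderr block above.
	func tailContainerLog(id string) error {
		out, err := exec.Command("sudo", "crictl", "inspect", id).Output()
		if err != nil {
			return fmt.Errorf("crictl inspect: %w", err)
		}
		var info struct {
			Status struct {
				LogPath string `json:"logPath"`
			} `json:"status"`
		}
		if err := json.Unmarshal(out, &info); err != nil {
			return err
		}
		if _, err := os.Stat(info.Status.LogPath); err != nil {
			return fmt.Errorf("log file gone, container likely recreated: %w", err)
		}
		cmd := exec.Command("sudo", "crictl", "logs", "--tail", "25", id)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		if err := tailContainerLog(os.Args[1]); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}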
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestPause/serial/VerifyStatus (13.44s)
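A note on the storage-provisioner goroutine dump in the stdout capture above: that worker is not wedged — it is parked in the standard client-go workqueue loop, blocked in `(*Type).Get` (hence the `sync.runtime_notifyListWait` frame) waiting for the next volume work item when the process exited. A minimal sketch of that loop, assuming client-go's `util/workqueue` package as named in the stack frames:

	package main

	import (
		"fmt"

		"k8s.io/client-go/util/workqueue"
	)

	// runVolumeWorker mirrors the frames in the dump: Get blocks on a
	// sync.Cond until an item arrives or the queue shuts down.
	func runVolumeWorker(queue workqueue.Interface) {
		for {
			item, shutdown := queue.Get()
			if shutdown {
				return
			}
			fmt.Println("processing volume work item:", item) // real code would provision here
			queue.Done(item)
		}
	}

	func main() {
		queue := workqueue.New()
		queue.Add("pvc-demo") // hypothetical work item
		done := make(chan struct{})
		go func() { runVolumeWorker(queue); close(done) }()
		queue.ShutDown() // drains remaining items, then unblocks Get
		<-done
	}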

TestPause/serial/PauseAgain (10.66s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210816222224-6986 --alsologtostderr -v=5
pause_test.go:107: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-20210816222224-6986 --alsologtostderr -v=5: exit status 80 (6.091649059s)

-- stdout --
	* Pausing node pause-20210816222224-6986 ... 
	
	

-- /stdout --
** stderr ** 
	I0816 22:26:10.242180   11986 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:26:10.242434   11986 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:26:10.242447   11986 out.go:311] Setting ErrFile to fd 2...
	I0816 22:26:10.242451   11986 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:26:10.242590   11986 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:26:10.242802   11986 out.go:305] Setting JSON to false
	I0816 22:26:10.242856   11986 mustload.go:65] Loading cluster: pause-20210816222224-6986
	I0816 22:26:10.243299   11986 config.go:177] Loaded profile config "pause-20210816222224-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:26:10.243851   11986 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:26:10.243941   11986 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:26:10.267078   11986 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44127
	I0816 22:26:10.267818   11986 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:26:10.268518   11986 main.go:130] libmachine: Using API Version  1
	I0816 22:26:10.268572   11986 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:26:10.275456   11986 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:26:10.275717   11986 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetState
	I0816 22:26:10.279529   11986 host.go:66] Checking if "pause-20210816222224-6986" exists ...
	I0816 22:26:10.280091   11986 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:26:10.280161   11986 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:26:10.294797   11986 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37701
	I0816 22:26:10.295366   11986 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:26:10.295911   11986 main.go:130] libmachine: Using API Version  1
	I0816 22:26:10.295932   11986 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:26:10.296426   11986 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:26:10.296580   11986 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:26:10.297341   11986 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-20210816222224-6986 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0816 22:26:10.300329   11986 out.go:177] * Pausing node pause-20210816222224-6986 ... 
	I0816 22:26:10.300354   11986 host.go:66] Checking if "pause-20210816222224-6986" exists ...
	I0816 22:26:10.300827   11986 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:26:10.300866   11986 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:26:10.314875   11986 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45235
	I0816 22:26:10.315340   11986 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:26:10.315859   11986 main.go:130] libmachine: Using API Version  1
	I0816 22:26:10.315887   11986 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:26:10.316191   11986 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:26:10.316357   11986 main.go:130] libmachine: (pause-20210816222224-6986) Calling .DriverName
	I0816 22:26:10.316583   11986 ssh_runner.go:149] Run: systemctl --version
	I0816 22:26:10.316616   11986 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHHostname
	I0816 22:26:10.324337   11986 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:26:10.324377   11986 main.go:130] libmachine: (pause-20210816222224-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:64:0e", ip: ""} in network mk-pause-20210816222224-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:22:39 +0000 UTC Type:0 Mac:52:54:00:54:64:0e Iaid: IPaddr:192.168.50.226 Prefix:24 Hostname:pause-20210816222224-6986 Clientid:01:52:54:00:54:64:0e}
	I0816 22:26:10.324398   11986 main.go:130] libmachine: (pause-20210816222224-6986) DBG | domain pause-20210816222224-6986 has defined IP address 192.168.50.226 and MAC address 52:54:00:54:64:0e in network mk-pause-20210816222224-6986
	I0816 22:26:10.324578   11986 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHPort
	I0816 22:26:10.324762   11986 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHKeyPath
	I0816 22:26:10.324938   11986 main.go:130] libmachine: (pause-20210816222224-6986) Calling .GetSSHUsername
	I0816 22:26:10.325095   11986 sshutil.go:53] new ssh client: &{IP:192.168.50.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/pause-20210816222224-6986/id_rsa Username:docker}
	I0816 22:26:10.477860   11986 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:26:10.500230   11986 pause.go:50] kubelet running: true
	I0816 22:26:10.500314   11986 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:26:15.929687   11986 ssh_runner.go:189] Completed: sudo systemctl disable --now kubelet: (5.429342945s)
	I0816 22:26:15.929743   11986 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:26:15.929819   11986 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:26:16.091383   11986 cri.go:76] found id: "f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a"
	I0816 22:26:16.091427   11986 cri.go:76] found id: "e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a"
	I0816 22:26:16.091436   11986 cri.go:76] found id: "2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2"
	I0816 22:26:16.091442   11986 cri.go:76] found id: "53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf"
	I0816 22:26:16.091449   11986 cri.go:76] found id: "76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1"
	I0816 22:26:16.091456   11986 cri.go:76] found id: "69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549"
	I0816 22:26:16.091462   11986 cri.go:76] found id: "825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7"
	I0816 22:26:16.091470   11986 cri.go:76] found id: "7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612"
	I0816 22:26:16.091477   11986 cri.go:76] found id: "9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d"
	I0816 22:26:16.091488   11986 cri.go:76] found id: "97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a"
	I0816 22:26:16.091494   11986 cri.go:76] found id: "3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423"
	I0816 22:26:16.091511   11986 cri.go:76] found id: "8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4"
	I0816 22:26:16.091522   11986 cri.go:76] found id: "28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0"
	I0816 22:26:16.091535   11986 cri.go:76] found id: "a8503bd796d5d979a6e1b8b5154986e8b77de391b4f091211451ea5f52808e52"
	I0816 22:26:16.091552   11986 cri.go:76] found id: "124fa393359f758ea47161b345d2cab4ce486d4473a4caad483449464d44315f"
	I0816 22:26:16.091563   11986 cri.go:76] found id: "8710cefecdbe5d31cd44e9ae3378bc08cbc56001326a1cb38026755196cac7d1"
	I0816 22:26:16.091569   11986 cri.go:76] found id: "38dc61b214a9cbd019de4ca9ab52fb6baf728336de6d715df22b027522ad8b20"
	I0816 22:26:16.091578   11986 cri.go:76] found id: ""
	I0816 22:26:16.091624   11986 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0816 22:26:16.144528   11986 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c","pid":5197,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c/rootfs","created":"2021-08-16T22:25:12.066223998Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_4f138dc7-da0e-4775-b4de-b0f7d616b212"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d","pid":4260,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1718d2a0276cefe490a041b714377b70ea31
374bde370f898789e3a342438c2d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d/rootfs","created":"2021-08-16T22:24:38.416064875Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2","pid":4868,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2/rootfs","created":"2021-08-16T22:25:00.408067872Z","annotations":{"io.kubernetes.cri.container-name":"kube-prox
y","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb","pid":4355,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb/rootfs","created":"2021-08-16T22:24:38.822262087Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf","pid":4782,"stat
us":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf/rootfs","created":"2021-08-16T22:24:53.416372678Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549","pid":4797,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549/rootfs","created":"2021-08-16T22:24:53.509902629Z","annotations":{"io.kubernetes.cri.con
tainer-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1","pid":4789,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1/rootfs","created":"2021-08-16T22:24:53.511150294Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7","pid":4679,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v
2.task/k8s.io/825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7/rootfs","created":"2021-08-16T22:24:52.973956456Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb","pid":5035,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb/rootfs","created":"2021-08-16T22:25:00.913945951Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":
"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-gkxhz_5aa76749-775e-423d-bbf9-680a20a27051"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8","pid":4358,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8/rootfs","created":"2021-08-16T22:24:39.327096963Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e70dd80568a0a134cd147b42c9c85b176b8e57570
012074e1f92a3b1a94bab9a","pid":5094,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a/rootfs","created":"2021-08-16T22:25:01.539181689Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a","pid":4383,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a/rootfs","created":"2021-08-16T22:24:39.213036413Z","an
notations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03","pid":4283,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03/rootfs","created":"2021-08-16T22:24:38.475165162Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-
6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c"},"owner":"root"}]
	I0816 22:26:16.144777   11986 cri.go:113] list returned 13 containers
	I0816 22:26:16.144793   11986 cri.go:116] container: {ID:0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c Status:running}
	I0816 22:26:16.144815   11986 cri.go:118] skipping 0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c - not in ps
	I0816 22:26:16.144822   11986 cri.go:116] container: {ID:1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d Status:running}
	I0816 22:26:16.144829   11986 cri.go:118] skipping 1718d2a0276cefe490a041b714377b70ea31374bde370f898789e3a342438c2d - not in ps
	I0816 22:26:16.144842   11986 cri.go:116] container: {ID:2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2 Status:running}
	I0816 22:26:16.144856   11986 cri.go:116] container: {ID:3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb Status:running}
	I0816 22:26:16.144866   11986 cri.go:118] skipping 3b9459ff3a0d8af1ce5d825b39ad18d0d0623500b3c2d28ac97318b178030cfb - not in ps
	I0816 22:26:16.144872   11986 cri.go:116] container: {ID:53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf Status:running}
	I0816 22:26:16.144882   11986 cri.go:116] container: {ID:69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549 Status:running}
	I0816 22:26:16.144892   11986 cri.go:116] container: {ID:76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1 Status:running}
	I0816 22:26:16.144899   11986 cri.go:116] container: {ID:825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7 Status:running}
	I0816 22:26:16.144908   11986 cri.go:116] container: {ID:c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb Status:running}
	I0816 22:26:16.144920   11986 cri.go:118] skipping c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb - not in ps
	I0816 22:26:16.144925   11986 cri.go:116] container: {ID:d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8 Status:running}
	I0816 22:26:16.144935   11986 cri.go:118] skipping d73b4cafe25f00e2d17c4cb10141a60dff5a3186bd7f33485e1258e0fdfe3de8 - not in ps
	I0816 22:26:16.144941   11986 cri.go:116] container: {ID:e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a Status:running}
	I0816 22:26:16.144950   11986 cri.go:116] container: {ID:fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a Status:running}
	I0816 22:26:16.144957   11986 cri.go:118] skipping fb9f201b2c2e19788c63af1087ce4e32351001f249f6949c6ce64943e930809a - not in ps
	I0816 22:26:16.144965   11986 cri.go:116] container: {ID:feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03 Status:running}
	I0816 22:26:16.144972   11986 cri.go:118] skipping feab707eb735a3ea1ba2975458aac0a9c1fe9d40dc0274c27bac1bf7e1a3dd03 - not in ps
	I0816 22:26:16.145013   11986 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2
	I0816 22:26:16.174149   11986 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2 53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf
	I0816 22:26:16.213106   11986 out.go:177] 
	W0816 22:26:16.213285   11986 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2 53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:26:16Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2 53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:26:16Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0816 22:26:16.213315   11986 out.go:242] * 
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	W0816 22:26:16.251962   11986 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0816 22:26:16.253948   11986 out.go:177] 

** /stderr **
pause_test.go:109: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-20210816222224-6986 --alsologtostderr -v=5" : exit status 80
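(Note: the root cause is visible in the stderr above: the pause step batched two container IDs into a single `runc pause` invocation, but runc's pause subcommand accepts exactly one container ID per call ("pause" requires exactly 1 argument(s)), so runc exits with status 1 and minikube reports GUEST_PAUSE / exit status 80. Below is a minimal sketch of the one-ID-per-call invocation that avoids the usage error — the loop is illustrative, not minikube's actual fix; only the `--root /run/containerd/runc/k8s.io` path is taken from the log.)

-- sketch (Go, illustrative) --
	// Package pauser sketches pausing containers one at a time, since
	// `runc pause` rejects multiple IDs in a single invocation.
	package pauser

	import (
		"fmt"
		"os/exec"
	)

	// pauseContainers is a hypothetical helper: it calls `runc pause` once
	// per container ID under the given runc root.
	func pauseContainers(root string, ids []string) error {
		for _, id := range ids {
			cmd := exec.Command("sudo", "runc", "--root", root, "pause", id)
			if out, err := cmd.CombinedOutput(); err != nil {
				return fmt.Errorf("runc pause %s: %v\n%s", id, err, out)
			}
		}
		return nil
	}
-- /sketch --

(Called, for example, with root "/run/containerd/runc/k8s.io" and the two IDs from the failing command above, this would issue two separate pause invocations instead of one malformed batch.)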
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816222224-6986 -n pause-20210816222224-6986
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816222224-6986 -n pause-20210816222224-6986: exit status 2 (335.981611ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210816222224-6986 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210816222224-6986 logs -n 25: exit status 110 (2.209713786s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                  |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                     | multinode-20210816215441-6986          | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:11:11 UTC | Mon, 16 Aug 2021 22:15:19 UTC |
	|         | multinode-20210816215441-6986          |                                        |         |         |                               |                               |
	|         | --wait=true -v=8                       |                                        |         |         |                               |                               |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | --driver=kvm2                          |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| start   | -p                                     | multinode-20210816215441-6986-m03      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:19 UTC | Mon, 16 Aug 2021 22:16:20 UTC |
	|         | multinode-20210816215441-6986-m03      |                                        |         |         |                               |                               |
	|         | --driver=kvm2                          |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| delete  | -p                                     | multinode-20210816215441-6986-m03      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:16:20 UTC | Mon, 16 Aug 2021 22:16:21 UTC |
	|         | multinode-20210816215441-6986-m03      |                                        |         |         |                               |                               |
	| delete  | -p                                     | multinode-20210816215441-6986          | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:16:21 UTC | Mon, 16 Aug 2021 22:16:23 UTC |
	|         | multinode-20210816215441-6986          |                                        |         |         |                               |                               |
	| start   | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:18:07 UTC | Mon, 16 Aug 2021 22:19:45 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | --wait=true --preload=false            |                                        |         |         |                               |                               |
	|         | --driver=kvm2                          |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0           |                                        |         |         |                               |                               |
	| ssh     | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:45 UTC | Mon, 16 Aug 2021 22:19:47 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | -- sudo crictl pull busybox            |                                        |         |         |                               |                               |
	| start   | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:48 UTC | Mon, 16 Aug 2021 22:20:39 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=kvm2         |                                        |         |         |                               |                               |
	|         |  --container-runtime=containerd        |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3           |                                        |         |         |                               |                               |
	| ssh     | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:39 UTC | Mon, 16 Aug 2021 22:20:39 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | -- sudo crictl image ls                |                                        |         |         |                               |                               |
	| delete  | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:39 UTC | Mon, 16 Aug 2021 22:20:40 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	| start   | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:40 UTC | Mon, 16 Aug 2021 22:21:45 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	|         | --memory=2048 --driver=kvm2            |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| stop    | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:46 UTC | Mon, 16 Aug 2021 22:21:46 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	|         | --cancel-scheduled                     |                                        |         |         |                               |                               |
	| stop    | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:58 UTC | Mon, 16 Aug 2021 22:22:05 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	|         | --schedule 5s                          |                                        |         |         |                               |                               |
	| delete  | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:23 UTC | Mon, 16 Aug 2021 22:22:24 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	| delete  | -p kubenet-20210816222224-6986         | kubenet-20210816222224-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:22:25 UTC |
	| delete  | -p false-20210816222225-6986           | false-20210816222225-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:25 UTC | Mon, 16 Aug 2021 22:22:25 UTC |
	| start   | -p                                     | force-systemd-env-20210816222224-6986  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:24:04 UTC |
	|         | force-systemd-env-20210816222224-6986  |                                        |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | -v=5 --driver=kvm2                     |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| -p      | force-systemd-env-20210816222224-6986  | force-systemd-env-20210816222224-6986  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:04 UTC | Mon, 16 Aug 2021 22:24:04 UTC |
	|         | ssh cat /etc/containerd/config.toml    |                                        |         |         |                               |                               |
	| delete  | -p                                     | force-systemd-env-20210816222224-6986  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:04 UTC | Mon, 16 Aug 2021 22:24:05 UTC |
	|         | force-systemd-env-20210816222224-6986  |                                        |         |         |                               |                               |
	| start   | -p pause-20210816222224-6986           | pause-20210816222224-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:24:28 UTC |
	|         | --memory=2048                          |                                        |         |         |                               |                               |
	|         | --install-addons=false                 |                                        |         |         |                               |                               |
	|         | --wait=all --driver=kvm2               |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| start   | -p                                     | kubernetes-upgrade-20210816222225-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:25 UTC | Mon, 16 Aug 2021 22:24:48 UTC |
	|         | kubernetes-upgrade-20210816222225-6986 |                                        |         |         |                               |                               |
	|         | --memory=2200                          |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0           |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=kvm2   |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| stop    | -p                                     | kubernetes-upgrade-20210816222225-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:49 UTC | Mon, 16 Aug 2021 22:24:53 UTC |
	|         | kubernetes-upgrade-20210816222225-6986 |                                        |         |         |                               |                               |
	| start   | -p                                     | offline-containerd-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:25:02 UTC |
	|         | offline-containerd-20210816222224-6986 |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048   |                                        |         |         |                               |                               |
	|         | --wait=true --driver=kvm2              |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| delete  | -p                                     | offline-containerd-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:02 UTC | Mon, 16 Aug 2021 22:25:03 UTC |
	|         | offline-containerd-20210816222224-6986 |                                        |         |         |                               |                               |
	| start   | -p pause-20210816222224-6986           | pause-20210816222224-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:28 UTC | Mon, 16 Aug 2021 22:25:13 UTC |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                     |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| unpause | -p pause-20210816222224-6986           | pause-20210816222224-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:26:09 UTC | Mon, 16 Aug 2021 22:26:10 UTC |
	|         | --alsologtostderr -v=5                 |                                        |         |         |                               |                               |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:25:29
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:25:29.865623   11635 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:25:29.865693   11635 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:25:29.865697   11635 out.go:311] Setting ErrFile to fd 2...
	I0816 22:25:29.865700   11635 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:25:29.865823   11635 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:25:29.866117   11635 out.go:305] Setting JSON to false
	I0816 22:25:29.903940   11635 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":4092,"bootTime":1629148638,"procs":190,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:25:29.904062   11635 start.go:121] virtualization: kvm guest
	I0816 22:25:29.906708   11635 out.go:177] * [stopped-upgrade-20210816222405-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:25:29.908312   11635 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:29.906879   11635 notify.go:169] Checking for updates...
	I0816 22:25:29.909776   11635 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:25:29.911276   11635 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:25:29.811958   10879 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:25:29.817441   10879 start.go:413] Will wait 60s for crictl version
	I0816 22:25:29.817496   10879 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:25:29.853053   10879 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0816 22:25:29.853115   10879 ssh_runner.go:149] Run: containerd --version
	I0816 22:25:29.886777   10879 ssh_runner.go:149] Run: containerd --version
	I0816 22:25:29.912690   11635 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:25:29.913065   11635 config.go:177] Loaded profile config "stopped-upgrade-20210816222405-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0816 22:25:29.913582   11635 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:29.913647   11635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:29.927429   11635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45201
	I0816 22:25:29.927806   11635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:29.928407   11635 main.go:130] libmachine: Using API Version  1
	I0816 22:25:29.928426   11635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:29.928828   11635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:29.928990   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:29.930681   11635 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0816 22:25:29.930718   11635 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:25:29.931039   11635 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:29.931072   11635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:29.942479   11635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40149
	I0816 22:25:29.942868   11635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:29.943307   11635 main.go:130] libmachine: Using API Version  1
	I0816 22:25:29.943323   11635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:29.943763   11635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:29.943943   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:29.977758   11635 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 22:25:29.977793   11635 start.go:278] selected driver: kvm2
	I0816 22:25:29.977800   11635 start.go:751] validating driver "kvm2" against &{Name:stopped-upgrade-20210816222405-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.16.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:stopped-upgra
de-20210816222405-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.139 Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:29.977911   11635 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 22:25:29.979236   11635 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:29.979388   11635 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 22:25:29.992607   11635 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0816 22:25:29.992893   11635 cni.go:93] Creating CNI manager for ""
	I0816 22:25:29.992910   11635 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:25:29.992918   11635 start_flags.go:277] config:
	{Name:stopped-upgrade-20210816222405-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.16.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:stopped-upgrade-20210816222405-6986 Namespace:default APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.139 Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:29.993026   11635 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:29.994994   11635 out.go:177] * Starting control plane node stopped-upgrade-20210816222405-6986 in cluster stopped-upgrade-20210816222405-6986
	I0816 22:25:29.995015   11635 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	W0816 22:25:30.055078   11635 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.20.0-containerd-overlay2-amd64.tar.lz4 status code: 404
	I0816 22:25:30.055326   11635 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/stopped-upgrade-20210816222405-6986/config.json ...
	I0816 22:25:30.055447   11635 cache.go:108] acquiring lock: {Name:mk44a899d3e13d1e1a41236ca93bfa4c540d90ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055469   11635 cache.go:108] acquiring lock: {Name:mk54690e8b8165106a936f57493f4a5f28a2f038 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055479   11635 cache.go:108] acquiring lock: {Name:mk5fa6434b4b67f17fb247c2a1febaaba95afc21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055522   11635 cache.go:108] acquiring lock: {Name:mk248d78835e4d0dd7deebdde93e709059900376 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055596   11635 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:25:30.055621   11635 start.go:313] acquiring machines lock for stopped-upgrade-20210816222405-6986: {Name:mk808edd60d1305a42bb85791729eff4573dbb15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 22:25:30.055639   11635 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.20.0
	I0816 22:25:30.055653   11635 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.20.0
	I0816 22:25:30.055661   11635 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.20.0
	I0816 22:25:30.055667   11635 cache.go:108] acquiring lock: {Name:mk74554b1ad079cff2e1d01801c80cf158c3d0db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055449   11635 cache.go:108] acquiring lock: {Name:mk21e396b81c69d7c5a1e31157ecfaad7d142ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055784   11635 image.go:133] retrieving image: k8s.gcr.io/coredns:1.7.0
	I0816 22:25:30.055784   11635 cache.go:108] acquiring lock: {Name:mk4f799cc9e7aed39d4f75ef9ab783b3653bcaec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055644   11635 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0816 22:25:30.055857   11635 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.20.0
	I0816 22:25:30.055838   11635 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 404.352µs
	I0816 22:25:30.055869   11635 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0816 22:25:30.055492   11635 cache.go:108] acquiring lock: {Name:mk35426dc160f9577622987ee511aee9c6194a42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055863   11635 cache.go:108] acquiring lock: {Name:mk7034d6c48699d6f6387acd67a2f9aff6580cde Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055899   11635 cache.go:108] acquiring lock: {Name:mk83079d57b39c409d978da9d87f61040fa2879d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055991   11635 image.go:133] retrieving image: k8s.gcr.io/pause:3.2
	I0816 22:25:30.055993   11635 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0816 22:25:30.056005   11635 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0816 22:25:30.056013   11635 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 117.345µs
	I0816 22:25:30.056023   11635 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 216.257µs
	I0816 22:25:30.056038   11635 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0816 22:25:30.056041   11635 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0816 22:25:30.056058   11635 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.13-0
	I0816 22:25:30.069442   11635 image.go:171] found k8s.gcr.io/pause:3.2 locally: &{UncompressedImageCore:0xc0015120b8 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:30.069470   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2
	I0816 22:25:30.125132   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
	I0816 22:25:30.125180   11635 cache.go:97] cache image "k8s.gcr.io/pause:3.2" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 69.39782ms
	I0816 22:25:30.125195   11635 cache.go:81] save to tar file k8s.gcr.io/pause:3.2 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
	I0816 22:25:30.603136   11635 image.go:171] found k8s.gcr.io/coredns:1.7.0 locally: &{UncompressedImageCore:0xc0001b80d0 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:30.603171   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0
	I0816 22:25:30.944848   11635 start.go:317] acquired machines lock for "stopped-upgrade-20210816222405-6986" in 889.202815ms
	I0816 22:25:30.944895   11635 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:25:30.944913   11635 fix.go:55] fixHost starting: 
	I0816 22:25:30.945363   11635 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:30.945411   11635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:30.958997   11635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43851
	I0816 22:25:30.959558   11635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:30.960174   11635 main.go:130] libmachine: Using API Version  1
	I0816 22:25:30.960191   11635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:30.960618   11635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:30.961025   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:30.961188   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetState
	I0816 22:25:30.964772   11635 fix.go:108] recreateIfNeeded on stopped-upgrade-20210816222405-6986: state=Stopped err=<nil>
	I0816 22:25:30.964804   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	W0816 22:25:30.964947   11635 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:25:29.920921   10879 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on containerd 1.4.9 ...
	I0816 22:25:29.920959   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetIP
	I0816 22:25:29.926612   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:29.927031   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:29.927059   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:29.927194   10879 ssh_runner.go:149] Run: grep 192.168.116.1	host.minikube.internal$ /etc/hosts
	I0816 22:25:29.931935   10879 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.116.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:25:29.945360   10879 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 22:25:29.945419   10879 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:29.980071   10879 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:25:29.980092   10879 containerd.go:517] Images already preloaded, skipping extraction
	I0816 22:25:29.980136   10879 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:30.014510   10879 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:25:30.014534   10879 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:25:30.014596   10879 ssh_runner.go:149] Run: sudo crictl info
	I0816 22:25:30.049631   10879 cni.go:93] Creating CNI manager for ""
	I0816 22:25:30.049653   10879 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:25:30.049662   10879 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:25:30.049673   10879 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.116.91 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20210816222225-6986 NodeName:kubernetes-upgrade-20210816222225-6986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.116.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.116.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:25:30.049804   10879 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.116.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "kubernetes-upgrade-20210816222225-6986"
	  kubeletExtraArgs:
	    node-ip: 192.168.116.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.116.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 22:25:30.049880   10879 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-20210816222225-6986 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.116.91 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210816222225-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0816 22:25:30.049928   10879 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0816 22:25:30.057977   10879 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:25:30.058039   10879 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:25:30.069033   10879 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (559 bytes)
	I0816 22:25:30.084505   10879 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0816 22:25:30.099259   10879 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0816 22:25:30.117789   10879 ssh_runner.go:149] Run: grep 192.168.116.91	control-plane.minikube.internal$ /etc/hosts
	I0816 22:25:30.123120   10879 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.116.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:25:30.137790   10879 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986 for IP: 192.168.116.91
	I0816 22:25:30.137839   10879 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:25:30.137860   10879 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:25:30.137924   10879 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/client.key
	I0816 22:25:30.137959   10879 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/apiserver.key.0bcaee26
	I0816 22:25:30.137982   10879 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/proxy-client.key
	I0816 22:25:30.138107   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem (1338 bytes)
	W0816 22:25:30.138164   10879 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
	I0816 22:25:30.138180   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 22:25:30.138217   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:25:30.138260   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:25:30.138286   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1675 bytes)
	I0816 22:25:30.138335   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:25:30.139672   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:25:30.162501   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 22:25:30.187988   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:25:30.208990   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 22:25:30.232139   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:25:30.260638   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 22:25:30.287352   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:25:30.316722   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 22:25:30.346450   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
	I0816 22:25:30.369471   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
	I0816 22:25:30.397914   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:25:30.433581   10879 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:25:30.454657   10879 ssh_runner.go:149] Run: openssl version
	I0816 22:25:30.463107   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
	I0816 22:25:30.475906   10879 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6986.pem
	I0816 22:25:30.482988   10879 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:49 /usr/share/ca-certificates/6986.pem
	I0816 22:25:30.483059   10879 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
	I0816 22:25:30.492438   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
	I0816 22:25:30.507780   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
	I0816 22:25:30.522931   10879 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/69862.pem
	I0816 22:25:30.529692   10879 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:49 /usr/share/ca-certificates/69862.pem
	I0816 22:25:30.529753   10879 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
	I0816 22:25:30.540308   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:25:30.555395   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:25:30.571425   10879 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:30.577689   10879 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:42 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:30.577743   10879 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:30.588310   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:25:30.599536   10879 kubeadm.go:390] StartCluster: {Name:kubernetes-upgrade-20210816222225-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210816222225-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.116.91 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:30.599646   10879 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0816 22:25:30.599708   10879 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:25:30.645578   10879 cri.go:76] found id: ""
	I0816 22:25:30.645662   10879 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:25:30.656753   10879 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:25:30.656775   10879 kubeadm.go:600] restartCluster start
	I0816 22:25:30.656823   10879 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:25:30.665356   10879 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:30.666456   10879 kubeconfig.go:117] verify returned: extract IP: "kubernetes-upgrade-20210816222225-6986" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:30.666789   10879 kubeconfig.go:128] "kubernetes-upgrade-20210816222225-6986" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:25:30.667454   10879 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:25:30.668301   10879 kapi.go:59] client config for kubernetes-upgrade-20210816222225-6986: &rest.Config{Host:"https://192.168.116.91:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 22:25:30.670154   10879 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:25:30.680113   10879 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta2
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.116.91
	@@ -17,7 +17,7 @@
	     node-ip: 192.168.116.91
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta2
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.116.91"]
	@@ -31,7 +31,7 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-20210816222225-6986
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 dns:
	   type: CoreDNS
	@@ -39,8 +39,8 @@
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.116.91:2381
	-kubernetesVersion: v1.14.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.22.0-rc.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
	I0816 22:25:30.680129   10879 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:25:30.680144   10879 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:25:30.680191   10879 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:25:30.726916   10879 cri.go:76] found id: ""
	I0816 22:25:30.726997   10879 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:25:30.746591   10879 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:25:30.758779   10879 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:25:30.758838   10879 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:25:30.769229   10879 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:25:30.769260   10879 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:30.999868   10879 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:32.699195   10879 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.699296673s)
	I0816 22:25:32.699238   10879 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:33.156614   10879 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:33.351483   10879 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:33.492135   10879 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:25:33.492214   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:34.016071   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:30.969443   11635 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-20210816222405-6986" ...
	I0816 22:25:30.969474   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .Start
	I0816 22:25:30.969661   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Ensuring networks are active...
	I0816 22:25:30.972266   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Ensuring network default is active
	I0816 22:25:30.972626   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Ensuring network minikube-net is active
	I0816 22:25:30.973378   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Getting domain xml...
	I0816 22:25:30.975969   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Creating domain...
	I0816 22:25:31.337824   11635 image.go:171] found k8s.gcr.io/kube-scheduler:v1.20.0 locally: &{UncompressedImageCore:0xc0005f0060 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:31.337868   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0
	I0816 22:25:31.505137   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Waiting to get IP...
	I0816 22:25:31.505762   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:31.507077   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has current primary IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:31.507133   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Found IP for machine: 192.168.94.139
	I0816 22:25:31.507148   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Reserving static IP address...
	I0816 22:25:31.507652   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "stopped-upgrade-20210816222405-6986", mac: "52:54:00:48:08:ec", ip: "192.168.94.139"} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:24:28 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:31.507684   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-20210816222405-6986", mac: "52:54:00:48:08:ec", ip: "192.168.94.139"}
	I0816 22:25:31.507704   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | Getting to WaitForSSH function...
	I0816 22:25:31.507722   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Reserved static IP address: 192.168.94.139
	I0816 22:25:31.507732   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Waiting for SSH to be available...
	I0816 22:25:31.513971   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:31.514423   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:24:28 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:31.514456   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:31.514613   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | Using SSH client type: external
	I0816 22:25:31.514740   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816222405-6986/id_rsa (-rw-------)
	I0816 22:25:31.514792   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.94.139 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816222405-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:25:31.514855   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | About to run SSH command:
	I0816 22:25:31.514871   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | exit 0
	I0816 22:25:32.653039   11635 image.go:171] found k8s.gcr.io/kube-controller-manager:v1.20.0 locally: &{UncompressedImageCore:0xc000114030 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:32.653105   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0
	I0816 22:25:32.791720   11635 image.go:171] found k8s.gcr.io/kube-apiserver:v1.20.0 locally: &{UncompressedImageCore:0xc001512008 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:32.791795   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0
	I0816 22:25:34.107008   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 exists
	I0816 22:25:34.107061   11635 cache.go:97] cache image "k8s.gcr.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0" took 4.051394985s
	I0816 22:25:34.107085   11635 cache.go:81] save to tar file k8s.gcr.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 succeeded
	I0816 22:25:34.315350   11635 image.go:171] found k8s.gcr.io/etcd:3.4.13-0 locally: &{UncompressedImageCore:0xc000114050 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:34.315401   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0
	I0816 22:25:34.340626   11635 image.go:171] found k8s.gcr.io/kube-proxy:v1.20.0 locally: &{UncompressedImageCore:0xc000010208 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:34.340673   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0
	I0816 22:25:34.809104   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 exists
	I0816 22:25:34.809158   11635 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0" took 4.753722894s
	I0816 22:25:34.809180   11635 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 succeeded
	I0816 22:25:34.516083   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:35.015857   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:35.519104   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:36.016162   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:36.515658   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:37.015674   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:37.515739   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:38.015673   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:38.515650   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:39.015668   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:39.515980   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:40.015682   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:40.515733   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:41.016134   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:41.516902   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:42.015609   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:42.516119   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:43.016137   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:43.515737   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:44.019076   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:40.829030   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 exists
	I0816 22:25:40.829145   11635 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0" took 10.773666591s
	I0816 22:25:40.829193   11635 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 succeeded
	I0816 22:25:41.800993   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 exists
	I0816 22:25:41.801047   11635 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0" took 11.745601732s
	I0816 22:25:41.801066   11635 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 succeeded
	I0816 22:25:43.868728   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 exists
	I0816 22:25:43.868783   11635 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0" took 13.813268232s
	I0816 22:25:43.868801   11635 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 succeeded
	I0816 22:25:44.515937   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:45.016224   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:45.516460   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:46.016178   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:46.516524   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:47.015786   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:47.516658   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:48.015708   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:48.515976   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:49.016470   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:50.674039   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | SSH cmd err, output: <nil>: 
	I0816 22:25:50.674356   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetConfigRaw
	I0816 22:25:50.674991   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetIP
	I0816 22:25:50.680301   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.680676   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:50.680700   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.680937   11635 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/stopped-upgrade-20210816222405-6986/config.json ...
	I0816 22:25:50.681113   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:50.681286   11635 machine.go:88] provisioning docker machine ...
	I0816 22:25:50.681307   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:50.681460   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetMachineName
	I0816 22:25:50.681591   11635 buildroot.go:166] provisioning hostname "stopped-upgrade-20210816222405-6986"
	I0816 22:25:50.681630   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetMachineName
	I0816 22:25:50.681772   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:50.686529   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.686855   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:50.686891   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.687006   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHPort
	I0816 22:25:50.687142   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:50.687255   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:50.687342   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHUsername
	I0816 22:25:50.687442   11635 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:50.687632   11635 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.94.139 22 <nil> <nil>}
	I0816 22:25:50.687655   11635 main.go:130] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-20210816222405-6986 && echo "stopped-upgrade-20210816222405-6986" | sudo tee /etc/hostname
	I0816 22:25:50.813329   11635 main.go:130] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-20210816222405-6986
	
	I0816 22:25:50.813359   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:50.818466   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.818799   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:50.818835   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.818990   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHPort
	I0816 22:25:50.819183   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:50.819328   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:50.819469   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHUsername
	I0816 22:25:50.819617   11635 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:50.819779   11635 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.94.139 22 <nil> <nil>}
	I0816 22:25:50.819808   11635 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-20210816222405-6986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-20210816222405-6986/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-20210816222405-6986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:25:50.843338   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 exists
	I0816 22:25:50.843375   11635 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0" took 20.787884619s
	I0816 22:25:50.843386   11635 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 succeeded
	I0816 22:25:50.843403   11635 cache.go:88] Successfully saved all images to host disk.
	I0816 22:25:50.936578   11635 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:25:50.936617   11635 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:25:50.936644   11635 buildroot.go:174] setting up certificates
	I0816 22:25:50.936656   11635 provision.go:83] configureAuth start
	I0816 22:25:50.936668   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetMachineName
	I0816 22:25:50.936926   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetIP
	I0816 22:25:50.941824   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.942162   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:50.942193   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.942301   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:50.946653   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.946938   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:50.946977   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.947097   11635 provision.go:138] copyHostCerts
	I0816 22:25:50.947147   11635 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:25:50.947156   11635 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:25:50.947203   11635 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:25:50.947271   11635 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:25:50.947280   11635 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:25:50.947296   11635 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
	I0816 22:25:50.947338   11635 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:25:50.947349   11635 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:25:50.947365   11635 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:25:50.947403   11635 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-20210816222405-6986 san=[192.168.94.139 192.168.94.139 localhost 127.0.0.1 minikube stopped-upgrade-20210816222405-6986]
	I0816 22:25:51.032615   11635 provision.go:172] copyRemoteCerts
	I0816 22:25:51.032661   11635 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:25:51.032682   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:51.037604   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.037874   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:51.037905   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.038009   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHPort
	I0816 22:25:51.038148   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:51.038246   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHUsername
	I0816 22:25:51.038339   11635 sshutil.go:53] new ssh client: &{IP:192.168.94.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816222405-6986/id_rsa Username:docker}
	I0816 22:25:51.123210   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:25:51.139587   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0816 22:25:51.155341   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:25:51.171555   11635 provision.go:86] duration metric: configureAuth took 234.887855ms
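
The configureAuth step above regenerates the docker-machine server certificate: it signs a fresh key with the CA under .minikube/certs and embeds the VM's IP and hostnames as SANs (the san=[...] list in the provision.go line). A minimal library-style sketch of that pattern in Go — the organization string and the three-year lifetime are illustrative assumptions, not minikube's exact values:

	package provision

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"time"
	)

	// GenerateServerCert issues a server certificate signed by caCert/caKey,
	// with the given IPs and DNS names as subject alternative names.
	func GenerateServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dnsNames []string) (certPEM, keyPEM []byte, err error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade"}}, // illustrative
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // assumed lifetime
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  ips,      // e.g. 192.168.94.139 and 127.0.0.1
			DNSNames:     dnsNames, // e.g. localhost, minikube, the profile name
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		return certPEM, keyPEM, nil
	}
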
	I0816 22:25:51.171579   11635 buildroot.go:189] setting minikube options for container-runtime
	I0816 22:25:51.171722   11635 config.go:177] Loaded profile config "stopped-upgrade-20210816222405-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0816 22:25:51.171746   11635 machine.go:91] provisioned docker machine in 490.44902ms
	I0816 22:25:51.171754   11635 start.go:267] post-start starting for "stopped-upgrade-20210816222405-6986" (driver="kvm2")
	I0816 22:25:51.171760   11635 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:25:51.171782   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:51.172055   11635 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:25:51.172077   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:51.176933   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.177267   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:51.177298   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.177441   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHPort
	I0816 22:25:51.177602   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:51.177724   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHUsername
	I0816 22:25:51.177820   11635 sshutil.go:53] new ssh client: &{IP:192.168.94.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816222405-6986/id_rsa Username:docker}
	I0816 22:25:51.280843   11635 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:25:51.285664   11635 info.go:137] Remote host: Buildroot 2020.02.8
	I0816 22:25:51.285688   11635 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:25:51.285748   11635 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:25:51.285858   11635 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0816 22:25:51.285968   11635 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:25:51.293397   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:25:51.309338   11635 start.go:270] post-start completed in 137.570449ms
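
Everything in post-start above — the mkdir -p batch and the 69862.pem asset copy — runs through an SSH session built from the key path shown in the sshutil.go lines. A minimal sketch of executing one such command with golang.org/x/crypto/ssh; the key path is abbreviated here, and skipping the host-key check is only reasonable for a throwaway test VM:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile(".minikube/machines/stopped-upgrade-20210816222405-6986/id_rsa") // abbreviated path
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only
		}
		client, err := ssh.Dial("tcp", "192.168.94.139:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("sudo mkdir -p /etc/kubernetes/addons /var/lib/minikube/certs")
		fmt.Printf("output: %s err: %v\n", out, err)
	}
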
	I0816 22:25:51.309377   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:51.309623   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:51.315839   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.316232   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:51.316258   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.316441   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHPort
	I0816 22:25:51.316643   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:51.316819   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:51.316941   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHUsername
	I0816 22:25:51.317135   11635 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:51.317300   11635 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.94.139 22 <nil> <nil>}
	I0816 22:25:51.317315   11635 main.go:130] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0816 22:25:51.429012   11635 main.go:130] libmachine: SSH cmd err, output: <nil>: 1629152751.360423899
	
	I0816 22:25:51.429036   11635 fix.go:212] guest clock: 1629152751.360423899
	I0816 22:25:51.429046   11635 fix.go:225] Guest: 2021-08-16 22:25:51.360423899 +0000 UTC Remote: 2021-08-16 22:25:51.30960232 +0000 UTC m=+21.494401500 (delta=50.821579ms)
	I0816 22:25:51.429069   11635 fix.go:196] guest clock delta is within tolerance: 50.821579ms
	I0816 22:25:51.429077   11635 fix.go:57] fixHost completed within 20.484163709s
	I0816 22:25:51.429086   11635 start.go:80] releasing machines lock for "stopped-upgrade-20210816222405-6986", held for 20.484204502s
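
The fix.go lines above run `date +%s.%N` on the guest, compare the result to the host's clock, and skip a resync because the ~50.8ms delta is inside tolerance. A sketch of that comparison; the one-second tolerance is an assumption, not minikube's actual threshold, and float parsing loses a little of the nanosecond precision:

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns how far
	// the guest clock sits from the supplied host timestamp.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		// Values taken from the fix.go lines above.
		delta, err := clockDelta("1629152751.360423899", time.Unix(0, 1629152751309602320))
		if err != nil {
			panic(err)
		}
		if math.Abs(float64(delta)) < float64(time.Second) { // assumed 1s tolerance
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		}
	}
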
	I0816 22:25:51.429135   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:51.429381   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetIP
	I0816 22:25:51.434740   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.435145   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:51.435177   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.435313   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:51.435469   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:51.435909   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:51.436161   11635 ssh_runner.go:149] Run: systemctl --version
	I0816 22:25:51.436188   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:51.436224   11635 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:25:51.436266   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:51.441919   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.442250   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:51.442273   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.442407   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHPort
	I0816 22:25:51.442580   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:51.442707   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHUsername
	I0816 22:25:51.442844   11635 sshutil.go:53] new ssh client: &{IP:192.168.94.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816222405-6986/id_rsa Username:docker}
	I0816 22:25:51.442940   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.443308   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:51.443347   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.443442   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHPort
	I0816 22:25:51.443577   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:51.443703   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHUsername
	I0816 22:25:51.443818   11635 sshutil.go:53] new ssh client: &{IP:192.168.94.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816222405-6986/id_rsa Username:docker}
	I0816 22:25:51.580011   11635 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0816 22:25:51.580081   11635 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0816 22:25:51.617346   11635 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0816 22:25:51.629075   11635 docker.go:153] disabling docker service ...
	I0816 22:25:51.629131   11635 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:25:51.640426   11635 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:25:51.649102   11635 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:25:51.808471   11635 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:25:51.947554   11635 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:25:51.958967   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:25:51.978504   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuMiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CgoJW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiXQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lc10KICAgICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmMub3B0aW9uc10KICAgICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZF0KICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgIFtwbHVnaW5zLmNyaS5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQuZCIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
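
Both runtime configs above are written with the same pattern: build one shell command on the host and pipe the payload through sudo tee on the guest. For config.toml the TOML body is additionally base64-encoded so that quotes and newlines survive the trip through the SSH runner (the blob above decodes to minikube's containerd template, beginning root = "/var/lib/containerd"). A sketch of constructing such a command; the three-line config body is a stand-in, not the full template:

	package main

	import (
		"encoding/base64"
		"fmt"
	)

	func main() {
		config := "root = \"/var/lib/containerd\"\nstate = \"/run/containerd\"\noom_score = 0\n" // stand-in body
		// Encode once on the host; the guest only ever sees ASCII, so no
		// shell quoting of the TOML itself is needed.
		b64 := base64.StdEncoding.EncodeToString([]byte(config))
		cmd := fmt.Sprintf("sudo mkdir -p /etc/containerd && printf %%s \"%s\" | base64 -d | sudo tee /etc/containerd/config.toml", b64)
		fmt.Println(cmd) // hand this string to the SSH runner
	}
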
	I0816 22:25:51.992652   11635 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:25:51.998569   11635 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:25:51.998615   11635 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:25:52.014222   11635 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 22:25:52.020197   11635 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:25:52.145893   11635 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:25:52.190011   11635 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0816 22:25:52.190112   11635 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:25:52.195055   11635 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0816 22:25:53.300309   11635 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
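
After restarting containerd, start.go polls for the socket with a deadline; the retry.go line above records one failed stat and a ~1.1s backoff before the second attempt succeeds. A sketch of that wait loop — using a local os.Stat and a fixed interval where minikube stats over SSH with jittered retries:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists or the deadline passes.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
			}
			time.Sleep(time.Second) // fixed interval; the real retry backs off with jitter
		}
	}

	func main() {
		if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("socket is up")
	}
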
	I0816 22:25:53.306277   11635 start.go:413] Will wait 60s for crictl version
	I0816 22:25:53.306323   11635 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:25:53.324774   11635 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.3
	RuntimeApiVersion:  v1alpha2
	I0816 22:25:53.324831   11635 ssh_runner.go:149] Run: containerd --version
	I0816 22:25:53.361138   11635 ssh_runner.go:149] Run: containerd --version
	I0816 22:25:49.515642   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:50.015846   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:50.516027   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:51.015729   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:51.516379   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:52.016301   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:52.516625   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:53.015666   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:53.516566   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:54.016518   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:53.402219   11635 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.4.3 ...
	I0816 22:25:53.402261   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetIP
	I0816 22:25:53.407765   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:53.408040   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:53.408078   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:53.408229   11635 ssh_runner.go:149] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0816 22:25:53.412479   11635 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:25:53.424358   11635 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0816 22:25:53.424401   11635 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:53.448218   11635 containerd.go:609] couldn't find preloaded image for "gcr.io/k8s-minikube/storage-provisioner:v5". assuming images are not preloaded.
	I0816 22:25:53.448242   11635 cache_images.go:78] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.20.0 k8s.gcr.io/kube-controller-manager:v1.20.0 k8s.gcr.io/kube-scheduler:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0 k8s.gcr.io/pause:3.2 k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0816 22:25:53.448286   11635 image.go:133] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0816 22:25:53.448315   11635 image.go:133] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:25:53.448336   11635 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.20.0
	I0816 22:25:53.448339   11635 image.go:133] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0816 22:25:53.448315   11635 image.go:133] retrieving image: k8s.gcr.io/pause:3.2
	I0816 22:25:53.448407   11635 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.13-0
	I0816 22:25:53.448419   11635 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.20.0
	I0816 22:25:53.448384   11635 image.go:133] retrieving image: k8s.gcr.io/coredns:1.7.0
	I0816 22:25:53.448483   11635 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.20.0
	I0816 22:25:53.448495   11635 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.20.0
	I0816 22:25:53.461290   11635 image.go:171] found k8s.gcr.io/pause:3.2 locally: &{UncompressedImageCore:0xc000010aa0 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:53.461345   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.2 | grep 80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
	I0816 22:25:53.970932   11635 cache_images.go:106] "k8s.gcr.io/pause:3.2" needs transfer: "k8s.gcr.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 22:25:53.970997   11635 cri.go:205] Removing image: k8s.gcr.io/pause:3.2
	I0816 22:25:53.971051   11635 ssh_runner.go:149] Run: which crictl
	I0816 22:25:53.982673   11635 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/pause:3.2
	I0816 22:25:53.998453   11635 image.go:171] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{UncompressedImageCore:0xc000010ac8 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:53.998519   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5 | grep 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0816 22:25:54.075847   11635 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2
	I0816 22:25:54.075942   11635 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.2
	I0816 22:25:54.205435   11635 image.go:171] found index.docker.io/kubernetesui/metrics-scraper:v1.0.4 locally: &{UncompressedImageCore:0xc0001141d0 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:54.205506   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/metrics-scraper:v1.0.4 | grep 86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4"
	I0816 22:25:54.279275   11635 image.go:171] found k8s.gcr.io/coredns:1.7.0 locally: &{UncompressedImageCore:0xc000114020 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:54.279354   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns:1.7.0 | grep bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16"
	I0816 22:25:54.516398   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:55.015670   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:55.515685   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:56.016261   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:56.516304   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:57.016559   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:57.515664   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:58.016621   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:58.515698   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:59.015770   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:54.977194   11635 ssh_runner.go:306] existence check for /var/lib/minikube/images/pause_3.2: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.2': No such file or directory
	I0816 22:25:54.977231   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2 --> /var/lib/minikube/images/pause_3.2 (325632 bytes)
	I0816 22:25:54.977302   11635 cache_images.go:106] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0816 22:25:54.977338   11635 cri.go:205] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:25:54.977370   11635 ssh_runner.go:149] Run: which crictl
	I0816 22:25:55.080433   11635 containerd.go:280] Loading image: /var/lib/minikube/images/pause_3.2
	I0816 22:25:55.080518   11635 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.2
	I0816 22:25:55.246428   11635 image.go:171] found k8s.gcr.io/kube-scheduler:v1.20.0 locally: &{UncompressedImageCore:0xc0012ae018 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:55.246517   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.20.0 | grep 3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899"
	I0816 22:25:55.341693   11635 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/metrics-scraper:v1.0.4 | grep 86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4": (1.136152354s)
	I0816 22:25:55.341751   11635 cache_images.go:106] "docker.io/kubernetesui/metrics-scraper:v1.0.4" needs transfer: "docker.io/kubernetesui/metrics-scraper:v1.0.4" does not exist at hash "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4" in container runtime
	I0816 22:25:55.341784   11635 cri.go:205] Removing image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0816 22:25:55.341831   11635 ssh_runner.go:149] Run: which crictl
	I0816 22:25:55.428834   11635 ssh_runner.go:149] Run: sudo /bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:25:55.428936   11635 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns:1.7.0 | grep bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16": (1.149568594s)
	I0816 22:25:55.428974   11635 cache_images.go:106] "k8s.gcr.io/coredns:1.7.0" needs transfer: "k8s.gcr.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 22:25:55.429009   11635 cri.go:205] Removing image: k8s.gcr.io/coredns:1.7.0
	I0816 22:25:55.429056   11635 ssh_runner.go:149] Run: which crictl
	I0816 22:25:55.592474   11635 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2 from cache
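
Every "Transferred and loaded ... from cache" line is the tail of the same three-step pipeline visible above: stat the tarball on the VM, scp it from the host cache only if the stat fails, then import it into containerd's k8s.io namespace. Sketched below with local exec standing in for the SSH runner, and shortened, illustrative paths:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runCmd stands in for minikube's SSH runner; here it just execs locally.
	func runCmd(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
		}
		return nil
	}

	func loadCachedImage(cachePath, vmPath string) error {
		// 1. Existence check: transfer only when the tarball is missing.
		if err := runCmd("stat", vmPath); err != nil {
			// 2. Transfer (scp in the real flow; cp keeps the sketch local).
			if err := runCmd("cp", cachePath, vmPath); err != nil {
				return err
			}
		}
		// 3. Import into the k8s.io containerd namespace.
		return runCmd("sudo", "ctr", "-n=k8s.io", "images", "import", vmPath)
	}

	func main() {
		err := loadCachedImage("cache/images/k8s.gcr.io/pause_3.2", "/var/lib/minikube/images/pause_3.2")
		fmt.Println("load result:", err)
	}
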
	I0816 22:25:56.011636   11635 ssh_runner.go:149] Run: sudo /bin/crictl rmi docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0816 22:25:56.011761   11635 cache_images.go:106] "k8s.gcr.io/kube-scheduler:v1.20.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 22:25:56.011800   11635 cri.go:205] Removing image: k8s.gcr.io/kube-scheduler:v1.20.0
	I0816 22:25:56.011831   11635 ssh_runner.go:149] Run: which crictl
	I0816 22:25:56.011930   11635 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/coredns:1.7.0
	I0816 22:25:56.012015   11635 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 22:25:56.012087   11635 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0816 22:25:56.037389   11635 ssh_runner.go:306] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0816 22:25:56.037424   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (10569216 bytes)
	I0816 22:25:56.164064   11635 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.20.0
	I0816 22:25:56.164139   11635 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0
	I0816 22:25:56.164233   11635 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_1.7.0
	I0816 22:25:56.223803   11635 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4
	I0816 22:25:56.223987   11635 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0816 22:25:56.276689   11635 containerd.go:280] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 22:25:56.276940   11635 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0816 22:25:56.518841   11635 image.go:171] found k8s.gcr.io/kube-apiserver:v1.20.0 locally: &{UncompressedImageCore:0xc0012ae030 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:56.519003   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.20.0 | grep ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"
	I0816 22:25:56.539546   11635 image.go:171] found k8s.gcr.io/kube-controller-manager:v1.20.0 locally: &{UncompressedImageCore:0xc0001b80d0 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:56.539619   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.20.0 | grep b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080"
	I0816 22:25:56.600467   11635 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0
	I0816 22:25:56.600552   11635 ssh_runner.go:306] existence check for /var/lib/minikube/images/coredns_1.7.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_1.7.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_1.7.0': No such file or directory
	I0816 22:25:56.600588   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 --> /var/lib/minikube/images/coredns_1.7.0 (16093184 bytes)
	I0816 22:25:56.600662   11635 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.20.0
	I0816 22:25:56.600681   11635 ssh_runner.go:306] existence check for /var/lib/minikube/images/metrics-scraper_v1.0.4: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/metrics-scraper_v1.0.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.4': No such file or directory
	I0816 22:25:56.600753   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 --> /var/lib/minikube/images/metrics-scraper_v1.0.4 (17437696 bytes)
	I0816 22:25:57.931744   11635 image.go:171] found k8s.gcr.io/kube-proxy:v1.20.0 locally: &{UncompressedImageCore:0xc0012ae028 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:57.931827   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.20.0 | grep 10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc"
	I0816 22:25:58.267238   11635 image.go:171] found k8s.gcr.io/etcd:3.4.13-0 locally: &{UncompressedImageCore:0xc0012ae030 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:58.267315   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.4.13-0 | grep 0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934"
	I0816 22:25:58.802005   11635 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.20.0 | grep ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99": (2.28292238s)
	I0816 22:25:58.802034   11635 ssh_runner.go:189] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.20.0: (2.201335079s)
	I0816 22:25:58.802028   11635 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.20.0 | grep b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080": (2.262384808s)
	I0816 22:25:58.802063   11635 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-scheduler_v1.20.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.20.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.20.0': No such file or directory
	I0816 22:25:58.802086   11635 cache_images.go:106] "k8s.gcr.io/kube-controller-manager:v1.20.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 22:25:58.802108   11635 cache_images.go:106] "k8s.gcr.io/etcd:3.4.13-0" needs transfer: "k8s.gcr.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 22:25:58.802116   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 --> /var/lib/minikube/images/kube-scheduler_v1.20.0 (16235008 bytes)
	I0816 22:25:58.802125   11635 cri.go:205] Removing image: k8s.gcr.io/kube-controller-manager:v1.20.0
	I0816 22:25:58.802143   11635 cri.go:205] Removing image: k8s.gcr.io/etcd:3.4.13-0
	I0816 22:25:58.802168   11635 ssh_runner.go:149] Run: which crictl
	I0816 22:25:58.802008   11635 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5: (2.524986382s)
	I0816 22:25:58.802185   11635 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 22:25:58.802081   11635 cache_images.go:106] "k8s.gcr.io/kube-proxy:v1.20.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 22:25:58.802207   11635 containerd.go:280] Loading image: /var/lib/minikube/images/coredns_1.7.0
	I0816 22:25:58.802075   11635 cache_images.go:106] "k8s.gcr.io/kube-apiserver:v1.20.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 22:25:58.802230   11635 cri.go:205] Removing image: k8s.gcr.io/kube-apiserver:v1.20.0
	I0816 22:25:58.802208   11635 cri.go:205] Removing image: k8s.gcr.io/kube-proxy:v1.20.0
	I0816 22:25:58.802249   11635 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_1.7.0
	I0816 22:25:58.802255   11635 ssh_runner.go:149] Run: which crictl
	I0816 22:25:58.802187   11635 ssh_runner.go:149] Run: which crictl
	I0816 22:25:58.802295   11635 ssh_runner.go:149] Run: which crictl
	I0816 22:25:58.829751   11635 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/etcd:3.4.13-0
	I0816 22:25:58.829766   11635 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-proxy:v1.20.0
	I0816 22:25:58.829831   11635 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.20.0
	I0816 22:25:58.829880   11635 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.20.0
	I0816 22:25:59.002701   11635 image.go:171] found index.docker.io/kubernetesui/dashboard:v2.1.0 locally: &{UncompressedImageCore:0xc0001b81d0 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:59.002760   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/dashboard:v2.1.0 | grep 9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db"
	I0816 22:25:59.516257   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.015687   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.516496   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:01.016103   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:01.516235   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:02.015652   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:02.515708   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:03.015981   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:03.516508   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:04.015735   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.792096   11635 ssh_runner.go:189] Completed: sudo /bin/crictl rmi k8s.gcr.io/etcd:3.4.13-0: (1.962308151s)
	I0816 22:26:00.792126   11635 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0
	I0816 22:26:00.792155   11635 ssh_runner.go:189] Completed: sudo /bin/crictl rmi k8s.gcr.io/kube-proxy:v1.20.0: (1.962353995s)
	I0816 22:26:00.792172   11635 ssh_runner.go:189] Completed: sudo /bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.20.0: (1.962317588s)
	I0816 22:26:00.792195   11635 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0
	I0816 22:26:00.792204   11635 ssh_runner.go:189] Completed: sudo /bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.20.0: (1.962305146s)
	I0816 22:26:00.792215   11635 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0
	I0816 22:26:00.792223   11635 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.4.13-0
	I0816 22:26:00.792269   11635 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.20.0
	I0816 22:26:00.792280   11635 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.20.0
	I0816 22:26:00.792285   11635 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/dashboard:v2.1.0 | grep 9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db": (1.789512986s)
	I0816 22:26:00.792179   11635 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0
	I0816 22:26:00.792315   11635 cache_images.go:106] "docker.io/kubernetesui/dashboard:v2.1.0" needs transfer: "docker.io/kubernetesui/dashboard:v2.1.0" does not exist at hash "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db" in container runtime
	I0816 22:26:00.792342   11635 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.20.0
	I0816 22:26:00.792348   11635 cri.go:205] Removing image: docker.io/kubernetesui/dashboard:v2.1.0
	I0816 22:26:00.792375   11635 ssh_runner.go:149] Run: which crictl
	I0816 22:26:00.792536   11635 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_1.7.0: (1.990265666s)
	I0816 22:26:00.792554   11635 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 from cache
	I0816 22:26:00.792586   11635 containerd.go:280] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0816 22:26:00.792613   11635 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0816 22:26:00.801855   11635 ssh_runner.go:149] Run: sudo /bin/crictl rmi docker.io/kubernetesui/dashboard:v2.1.0
	I0816 22:26:00.806530   11635 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-apiserver_v1.20.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.20.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.20.0': No such file or directory
	I0816 22:26:00.806555   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 --> /var/lib/minikube/images/kube-apiserver_v1.20.0 (36263424 bytes)
	I0816 22:26:00.818668   11635 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-proxy_v1.20.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.20.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.20.0': No such file or directory
	I0816 22:26:00.818702   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 --> /var/lib/minikube/images/kube-proxy_v1.20.0 (54292992 bytes)
	I0816 22:26:00.818715   11635 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-controller-manager_v1.20.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.20.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.20.0': No such file or directory
	I0816 22:26:00.818740   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 --> /var/lib/minikube/images/kube-controller-manager_v1.20.0 (34932224 bytes)
	I0816 22:26:00.818755   11635 ssh_runner.go:306] existence check for /var/lib/minikube/images/etcd_3.4.13-0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.4.13-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.4.13-0': No such file or directory
	I0816 22:26:00.818785   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 --> /var/lib/minikube/images/etcd_3.4.13-0 (98416128 bytes)
	I0816 22:26:03.153476   11635 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/metrics-scraper_v1.0.4: (2.360838076s)
	I0816 22:26:03.153518   11635 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 from cache
	I0816 22:26:03.153527   11635 ssh_runner.go:189] Completed: sudo /bin/crictl rmi docker.io/kubernetesui/dashboard:v2.1.0: (2.351640753s)
	I0816 22:26:03.153547   11635 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0
	I0816 22:26:03.153557   11635 containerd.go:280] Loading image: /var/lib/minikube/images/kube-scheduler_v1.20.0
	I0816 22:26:03.153609   11635 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.20.0
	I0816 22:26:03.153617   11635 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/dashboard_v2.1.0
	I0816 22:26:03.598649   11635 ssh_runner.go:306] existence check for /var/lib/minikube/images/dashboard_v2.1.0: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/dashboard_v2.1.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/dashboard_v2.1.0': No such file or directory
	I0816 22:26:03.598692   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 --> /var/lib/minikube/images/dashboard_v2.1.0 (78078976 bytes)
	I0816 22:26:03.598720   11635 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 from cache
	I0816 22:26:03.598766   11635 containerd.go:280] Loading image: /var/lib/minikube/images/kube-apiserver_v1.20.0
	I0816 22:26:03.598815   11635 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.20.0
	I0816 22:26:04.686988   11635 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.20.0: (1.088145061s)
	I0816 22:26:04.687017   11635 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 from cache
	I0816 22:26:04.687041   11635 containerd.go:280] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.20.0
	I0816 22:26:04.687104   11635 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.20.0
	I0816 22:26:04.516518   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:05.015801   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:05.515762   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:06.015635   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:06.515924   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:07.016271   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:07.515686   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:08.015901   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:08.516494   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:09.016345   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:05.647509   11635 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 from cache
	I0816 22:26:05.647560   11635 containerd.go:280] Loading image: /var/lib/minikube/images/kube-proxy_v1.20.0
	I0816 22:26:05.647610   11635 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.20.0
	I0816 22:26:09.515714   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:10.016162   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:10.516131   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:11.016215   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:11.036151   10879 api_server.go:70] duration metric: took 37.544016672s to wait for apiserver process to appear ...
	I0816 22:26:11.036182   10879 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:26:11.036193   10879 api_server.go:239] Checking apiserver healthz at https://192.168.116.91:8443/healthz ...
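
At this point the paused profile's wait switches from looking for a kube-apiserver process (the pgrep loop above) to polling its /healthz endpoint. A sketch of that health probe; skipping TLS verification is an assumption that fits a test cluster's self-signed apiserver certificate:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthy polls url until it returns 200 "ok" or the timeout elapses.
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed cert
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		fmt.Println(waitHealthy("https://192.168.116.91:8443/healthz", time.Minute))
	}
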
	I0816 22:26:09.951929   11635 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.20.0: (4.304292322s)
	I0816 22:26:09.951958   11635 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 from cache
	I0816 22:26:09.951981   11635 containerd.go:280] Loading image: /var/lib/minikube/images/etcd_3.4.13-0
	I0816 22:26:09.952023   11635 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.4.13-0
	I0816 22:26:12.919196   11635 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.4.13-0: (2.96714421s)
	I0816 22:26:12.919226   11635 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 from cache
	I0816 22:26:12.919245   11635 containerd.go:280] Loading image: /var/lib/minikube/images/dashboard_v2.1.0
	I0816 22:26:12.919292   11635 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/dashboard_v2.1.0
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	f04c445038901       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   0e10d9862204b
	e70dd80568a0a       296a6d5035e2d       About a minute ago   Running             coredns                   1                   c649190b7c07d
	2585772c8a261       adb2816ea823a       About a minute ago   Running             kube-proxy                2                   d73b4cafe25f0
	53780b2759956       3d174f00aa39e       About a minute ago   Running             kube-apiserver            2                   fb9f201b2c2e1
	76fef890edebe       6be0dc1302e30       About a minute ago   Running             kube-scheduler            2                   1718d2a0276ce
	69a7fab4848c4       0369cf4303ffd       About a minute ago   Running             etcd                      2                   3b9459ff3a0d8
	825e79d62718c       bc2bb319a7038       About a minute ago   Running             kube-controller-manager   2                   feab707eb735a
	7626b842ef886       3d174f00aa39e       About a minute ago   Created             kube-apiserver            1                   fb9f201b2c2e1
	9d9f34b35e099       adb2816ea823a       About a minute ago   Created             kube-proxy                1                   d73b4cafe25f0
	97c4cc3614116       6be0dc1302e30       About a minute ago   Created             kube-scheduler            1                   1718d2a0276ce
	3644e35e40a2f       0369cf4303ffd       About a minute ago   Created             etcd                      1                   3b9459ff3a0d8
	8c5f2c007cff4       bc2bb319a7038       About a minute ago   Created             kube-controller-manager   1                   feab707eb735a
	28c7161cd49a4       296a6d5035e2d       2 minutes ago        Exited              coredns                   0                   05c2427240818
	a8503bd796d5d       adb2816ea823a       2 minutes ago        Exited              kube-proxy                0                   a86c3b6ee3a70
	124fa393359f7       0369cf4303ffd       3 minutes ago        Exited              etcd                      0                   94a493a65b593
	8710cefecdbe5       6be0dc1302e30       3 minutes ago        Exited              kube-scheduler            0                   982e66890a90d
	38dc61b214a9c       3d174f00aa39e       3 minutes ago        Exited              kube-apiserver            0                   630ed9d4644e9
	
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2021-08-16 22:22:35 UTC, end at Mon 2021-08-16 22:26:17 UTC. --
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.233198734Z" level=info msg="StartContainer for \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.443953465Z" level=info msg="StopPodSandbox for \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.444769081Z" level=info msg="Container to stop \"28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.461877463Z" level=info msg="StartContainer for \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\" returns successfully"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536079353Z" level=info msg="TearDown network for sandbox \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\" successfully"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536191167Z" level=info msg="StopPodSandbox for \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\" returns successfully"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536962082Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-gkxhz,Uid:5aa76749-775e-423d-bbf9-680a20a27051,Namespace:kube-system,Attempt:1,}"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.776744568Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb pid=5007
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.290447333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-gkxhz,Uid:5aa76749-775e-423d-bbf9-680a20a27051,Namespace:kube-system,Attempt:1,} returns sandbox id \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\""
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.300600113Z" level=info msg="CreateContainer within sandbox \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.389478760Z" level=info msg="CreateContainer within sandbox \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\""
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.397162604Z" level=info msg="StartContainer for \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\""
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.594046909Z" level=info msg="StartContainer for \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\" returns successfully"
	Aug 16 22:25:11 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:11.852957632Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:4f138dc7-da0e-4775-b4de-b0f7d616b212,Namespace:kube-system,Attempt:0,}"
	Aug 16 22:25:11 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:11.903771908Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c pid=5174
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.439549893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:4f138dc7-da0e-4775-b4de-b0f7d616b212,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\""
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.451930506Z" level=info msg="CreateContainer within sandbox \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.521875733Z" level=info msg="CreateContainer within sandbox \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.523292924Z" level=info msg="StartContainer for \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.851898064Z" level=info msg="StartContainer for \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\" returns successfully"
	Aug 16 22:25:32 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:32.698376142Z" level=info msg="Finish piping stderr of container \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
	Aug 16 22:25:32 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:32.700549219Z" level=info msg="Finish piping stdout of container \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
	Aug 16 22:25:32 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:32.702928257Z" level=info msg="TaskExit event &TaskExit{ContainerID:f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a,ID:f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a,Pid:5251,ExitStatus:255,ExitedAt:2021-08-16 22:25:32.702245647 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:25:32 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:32.834950240Z" level=info msg="shim disconnected" id=f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a
	Aug 16 22:25:32 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:32.835568670Z" level=error msg="copy shim log" error="read /proc/self/fd/118: file already closed"
	
	* 
	* ==> coredns [28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0] <==
	* I0816 22:24:19.170128       1 trace.go:205] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.168) (total time: 30001ms):
	Trace[2019727887]: [30.001909435s] [30.001909435s] END
	E0816 22:24:19.170279       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0816 22:24:19.171047       1 trace.go:205] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.166) (total time: 30004ms):
	Trace[939984059]: [30.004733433s] [30.004733433s] END
	E0816 22:24:19.171149       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0816 22:24:19.171258       1 trace.go:205] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.166) (total time: 30004ms):
	Trace[911902081]: [30.004945736s] [30.004945736s] END
	E0816 22:24:19.171265       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
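
	The exited coredns instance above spent 30s failing to list Endpoints, Namespaces and Services because the TCP dial to the kubernetes Service VIP timed out. Since the error occurs at the dial stage, a raw TCP probe isolates the failure below TLS and HTTP. A minimal sketch, assuming the default 10.96.0.1:443 Service address from the log and a vantage point inside the cluster network:

	    package main

	    import (
	    	"fmt"
	    	"net"
	    	"time"
	    )

	    func main() {
	    	// coredns failed at the dial stage ("dial tcp 10.96.0.1:443: i/o timeout"),
	    	// so a plain TCP connect to the kubernetes Service VIP distinguishes a
	    	// missing/broken kube-proxy rule from a TLS- or auth-level problem.
	    	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	    	if err != nil {
	    		fmt.Println("dial failed:", err)
	    		return
	    	}
	    	conn.Close()
	    	fmt.Println("TCP reachable; Service VIP rules are in place")
	    }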
	
	* 
	* ==> coredns [e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7ae91e86dd75dee9ae501cb58003198b
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210816222224-6986
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20210816222224-6986
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=pause-20210816222224-6986
	                    minikube.k8s.io/updated_at=2021_08_16T22_23_26_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Aug 2021 22:23:23 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210816222224-6986
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Aug 2021 22:25:19 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 16 Aug 2021 22:24:59 +0000   Mon, 16 Aug 2021 22:26:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 16 Aug 2021 22:24:59 +0000   Mon, 16 Aug 2021 22:26:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 16 Aug 2021 22:24:59 +0000   Mon, 16 Aug 2021 22:26:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 16 Aug 2021 22:24:59 +0000   Mon, 16 Aug 2021 22:26:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.50.226
	  Hostname:    pause-20210816222224-6986
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2033044Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2033044Ki
	  pods:               110
	System Info:
	  Machine ID:                 940ad300f94c41e2a0b0cde81be11541
	  System UUID:                940ad300-f94c-41e2-a0b0-cde81be11541
	  Boot ID:                    ea001a4b-e783-4f93-b7d3-bb910eb45d3c
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-gkxhz                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m31s
	  kube-system                 etcd-pause-20210816222224-6986                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m53s
	  kube-system                 kube-apiserver-pause-20210816222224-6986             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m51s
	  kube-system                 kube-controller-manager-pause-20210816222224-6986    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 kube-proxy-7l59t                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-scheduler-pause-20210816222224-6986             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m45s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         66s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  3m5s (x6 over 3m6s)  kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m5s (x5 over 3m6s)  kubelet     Node pause-20210816222224-6986 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m5s (x5 over 3m6s)  kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m45s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m45s                kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m45s                kubelet     Node pause-20210816222224-6986 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m45s                kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m45s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m35s                kubelet     Node pause-20210816222224-6986 status is now: NodeReady
	  Normal  Starting                 2m28s                kube-proxy  Starting kube-proxy.
	  Normal  Starting                 86s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  86s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  85s (x8 over 86s)    kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    85s (x8 over 86s)    kubelet     Node pause-20210816222224-6986 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     85s (x7 over 86s)    kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientPID
	  Normal  Starting                 77s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.006251] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.889854] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[ +16.286436] systemd-fstab-generator[2098]: Ignoring "noauto" for root device
	[  +0.258185] systemd-fstab-generator[2128]: Ignoring "noauto" for root device
	[  +0.135377] systemd-fstab-generator[2143]: Ignoring "noauto" for root device
	[  +0.180446] systemd-fstab-generator[2173]: Ignoring "noauto" for root device
	[Aug16 22:23] systemd-fstab-generator[2381]: Ignoring "noauto" for root device
	[ +20.504547] systemd-fstab-generator[2808]: Ignoring "noauto" for root device
	[ +20.717915] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.551219] kauditd_printk_skb: 104 callbacks suppressed
	[Aug16 22:24] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.792051] systemd-fstab-generator[3754]: Ignoring "noauto" for root device
	[  +0.176916] systemd-fstab-generator[3767]: Ignoring "noauto" for root device
	[  +0.230657] systemd-fstab-generator[3792]: Ignoring "noauto" for root device
	[  +4.083098] kauditd_printk_skb: 2 callbacks suppressed
	[  +3.840195] NFSD: Unable to end grace period: -110
	[  +4.324119] systemd-fstab-generator[4543]: Ignoring "noauto" for root device
	[  +6.680726] kauditd_printk_skb: 29 callbacks suppressed
	[Aug16 22:25] kauditd_printk_skb: 14 callbacks suppressed
	[ +12.641213] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.313576] systemd-fstab-generator[5666]: Ignoring "noauto" for root device
	[  +0.847782] systemd-fstab-generator[5723]: Ignoring "noauto" for root device
	[  +1.051927] systemd-fstab-generator[5775]: Ignoring "noauto" for root device
	[Aug16 22:26] systemd-fstab-generator[6446]: Ignoring "noauto" for root device
	[  +0.762421] systemd-fstab-generator[6474]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [124fa393359f758ea47161b345d2cab4ce486d4473a4caad483449464d44315f] <==
	* 2021-08-16 22:23:41.064197 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210816222224-6986\" " with result "range_response_count:1 size:5052" took too long (6.421187445s) to execute
	2021-08-16 22:23:41.065847 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:341" took too long (6.446897155s) to execute
	2021-08-16 22:23:41.066285 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/node-controller\" " with result "range_response_count:1 size:242" took too long (5.09674902s) to execute
	2021-08-16 22:23:41.068005 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (6.28196539s) to execute
	2021-08-16 22:23:41.068259 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (763.710719ms) to execute
	2021-08-16 22:23:41.880435 W | etcdserver: read-only range request "key:\"/registry/masterleases/192.168.50.226\" " with result "range_response_count:0 size:5" took too long (776.335267ms) to execute
	2021-08-16 22:23:41.881080 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (597.366064ms) to execute
	2021-08-16 22:23:41.882354 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:4569" took too long (763.841142ms) to execute
	2021-08-16 22:23:41.883287 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (621.677263ms) to execute
	2021-08-16 22:23:41.884722 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (481.499599ms) to execute
	2021-08-16 22:23:41.885189 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-pause-20210816222224-6986\" " with result "range_response_count:1 size:5421" took too long (772.180278ms) to execute
	2021-08-16 22:23:42.453217 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (290.061418ms) to execute
	2021-08-16 22:23:42.455427 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (285.893643ms) to execute
	2021-08-16 22:23:42.456943 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-pause-20210816222224-6986\" " with result "range_response_count:1 size:6314" took too long (153.946258ms) to execute
	2021-08-16 22:23:42.458024 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (177.825431ms) to execute
	2021-08-16 22:23:44.267832 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:23:54.092150 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (701.802797ms) to execute
	2021-08-16 22:23:54.093518 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-ctgf5\" " with result "range_response_count:1 size:4473" took too long (1.090386256s) to execute
	2021-08-16 22:23:54.267392 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:23:57.768234 W | etcdserver: request "header:<ID:4263355585347158035 > lease_revoke:<id:3b2a7b510fcb7e67>" with result "size:29" took too long (771.90226ms) to execute
	2021-08-16 22:23:57.768903 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (374.444829ms) to execute
	2021-08-16 22:23:57.769379 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-ctgf5\" " with result "range_response_count:1 size:4473" took too long (765.115046ms) to execute
	2021-08-16 22:24:04.267548 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:24:14.267958 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:24:24.268321 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> etcd [3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423] <==
	* 
	* ==> etcd [69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549] <==
	* 2021-08-16 22:24:53.773065 W | auth: simple token is not cryptographically signed
	2021-08-16 22:24:53.837118 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	raft2021/08/16 22:24:53 INFO: e840193bf29c3b2a switched to configuration voters=(16735403960572853034)
	2021-08-16 22:24:53.849298 I | etcdserver/membership: added member e840193bf29c3b2a [https://192.168.50.226:2380] to cluster 99b90e1bea73c730
	2021-08-16 22:24:53.860198 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-16 22:24:53.864997 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-16 22:24:53.865214 I | embed: listening for peers on 192.168.50.226:2380
	2021-08-16 22:24:53.868083 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-16 22:24:53.871735 I | etcdserver/api: enabled capabilities for version 3.4
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a is starting a new election at term 2
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a became candidate at term 3
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a received MsgVoteResp from e840193bf29c3b2a at term 3
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a became leader at term 3
	raft2021/08/16 22:24:54 INFO: raft.node: e840193bf29c3b2a elected leader e840193bf29c3b2a at term 3
	2021-08-16 22:24:54.968820 I | embed: ready to serve client requests
	2021-08-16 22:24:54.969394 I | etcdserver: published {Name:pause-20210816222224-6986 ClientURLs:[https://192.168.50.226:2379]} to cluster 99b90e1bea73c730
	2021-08-16 22:24:54.971284 I | embed: serving client requests on 192.168.50.226:2379
	2021-08-16 22:24:54.971462 I | embed: ready to serve client requests
	2021-08-16 22:24:54.973508 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-16 22:25:03.067902 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-gkxhz\" " with result "range_response_count:1 size:4860" took too long (140.807991ms) to execute
	2021-08-16 22:25:06.747736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:25:08.138740 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:25:10.645123 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:3838" took too long (108.124514ms) to execute
	2021-08-16 22:25:10.645989 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:665" took too long (107.967343ms) to execute
	2021-08-16 22:25:18.137756 I | etcdserver/api/etcdhttp: /health OK (status code 200)
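
	Both etcd instances log "read-only range request ... took too long" warnings (etcd flags requests over roughly 100ms) alongside periodic "/health OK" lines, and the restarted instance shows a clean single-member re-election at term 3. That instance also reports "listening for metrics on http://127.0.0.1:2381", a plain-HTTP listener that also serves /health. A minimal latency probe against it, as a sketch to be run on the node itself:

	    package main

	    import (
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"time"
	    )

	    func main() {
	    	// Plain HTTP, no client certs needed, matching the periodic
	    	// "/health OK (status code 200)" lines in the etcd log above.
	    	client := &http.Client{Timeout: 2 * time.Second}
	    	start := time.Now()
	    	resp, err := client.Get("http://127.0.0.1:2381/health")
	    	if err != nil {
	    		fmt.Println("health check failed:", err)
	    		return
	    	}
	    	defer resp.Body.Close()
	    	body, _ := io.ReadAll(resp.Body)
	    	// Latency well above ~100ms would line up with the "took too long" warnings.
	    	fmt.Printf("%s in %s: %s\n", resp.Status, time.Since(start), body)
	    }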
	
	* 
	* ==> kernel <==
	*  22:26:18 up 3 min,  0 users,  load average: 1.51, 1.36, 0.59
	Linux pause-20210816222224-6986 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [38dc61b214a9cbd019de4ca9ab52fb6baf728336de6d715df22b027522ad8b20] <==
	* I0816 22:23:41.890272       1 trace.go:205] Trace[914939944]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:23:41.117) (total time: 772ms):
	Trace[914939944]: [772.29448ms] [772.29448ms] END
	I0816 22:23:41.897880       1 trace.go:205] Trace[372773048]: "List" url:/api/v1/nodes,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/system:serviceaccount:kube-system:node-controller,client:192.168.50.226,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:41.117) (total time: 780ms):
	Trace[372773048]: ---"Listing from storage done" 773ms (22:23:00.891)
	Trace[372773048]: [780.024685ms] [780.024685ms] END
	I0816 22:23:41.899245       1 trace.go:205] Trace[189474875]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-20210816222224-6986,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.50.226,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (16-Aug-2021 22:23:41.107) (total time: 791ms):
	Trace[189474875]: ---"About to write a response" 791ms (22:23:00.899)
	Trace[189474875]: [791.769473ms] [791.769473ms] END
	I0816 22:23:41.914143       1 trace.go:205] Trace[1803257945]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (16-Aug-2021 22:23:41.101) (total time: 812ms):
	Trace[1803257945]: ---"initial value restored" 795ms (22:23:00.897)
	Trace[1803257945]: [812.099383ms] [812.099383ms] END
	I0816 22:23:46.219827       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0816 22:23:46.322056       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0816 22:23:54.101003       1 trace.go:205] Trace[1429856954]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-ctgf5,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.50.1,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:53.002) (total time: 1098ms):
	Trace[1429856954]: ---"About to write a response" 1098ms (22:23:00.100)
	Trace[1429856954]: [1.0988209s] [1.0988209s] END
	I0816 22:23:56.194218       1 client.go:360] parsed scheme: "passthrough"
	I0816 22:23:56.194943       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 22:23:56.195388       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0816 22:23:57.770900       1 trace.go:205] Trace[2103117378]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-ctgf5,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.50.1,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:57.002) (total time: 767ms):
	Trace[2103117378]: ---"About to write a response" 767ms (22:23:00.770)
	Trace[2103117378]: [767.944134ms] [767.944134ms] END
	I0816 22:24:32.818404       1 client.go:360] parsed scheme: "passthrough"
	I0816 22:24:32.818597       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 22:24:32.818691       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-apiserver [53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf] <==
	* I0816 22:25:01.108795       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0816 22:25:01.182177       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0816 22:25:01.279321       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0816 22:25:01.344553       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 22:25:01.382891       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 22:25:11.471022       1 controller.go:611] quota admission added evaluator for: endpoints
	I0816 22:25:13.002505       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 22:26:09.708590       1 client.go:360] parsed scheme: "passthrough"
	I0816 22:26:09.708813       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 22:26:09.708836       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0816 22:26:10.250820       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, context canceled]"
	E0816 22:26:10.251413       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:26:10.251945       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:26:10.252962       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0816 22:26:10.253318       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, context canceled]"
	E0816 22:26:10.253689       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:26:10.255306       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:26:10.287601       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0816 22:26:10.923951       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:26:10.924099       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:26:10.926309       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0816 22:26:10.928101       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0816 22:26:10.928591       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:26:10.929574       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:26:10.930759       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
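
	The handler timeouts at 22:26:10 coincide with the healthz wait in the stderr excerpt ("Checking apiserver healthz at https://.../healthz"): requests were accepted but handlers stalled during the restart. A minimal sketch of that style of healthz probe, assuming a test context where skipping CA verification is acceptable (a real client would pin the cluster CA); 192.168.50.226 is this node's InternalIP from the describe-nodes output:

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"net/http"
	    	"time"
	    )

	    // probeHealthz issues a bounded GET against the apiserver's /healthz
	    // endpoint, which is readable without credentials on default clusters.
	    func probeHealthz(url string) error {
	    	client := &http.Client{
	    		Timeout: 5 * time.Second,
	    		Transport: &http.Transport{
	    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	    		},
	    	}
	    	resp, err := client.Get(url)
	    	if err != nil {
	    		return err
	    	}
	    	defer resp.Body.Close()
	    	if resp.StatusCode != http.StatusOK {
	    		return fmt.Errorf("healthz returned %s", resp.Status)
	    	}
	    	return nil
	    }

	    func main() {
	    	fmt.Println(probeHealthz("https://192.168.50.226:8443/healthz"))
	    }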
	
	* 
	* ==> kube-apiserver [7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612] <==
	* 
	* ==> kube-controller-manager [825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7] <==
	* W0816 22:26:05.643314       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolumeClaim ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643340       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643359       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PriorityClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643375       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.Ingress ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643397       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643403       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PodDisruptionBudget ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643441       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.RoleBinding ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643454       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.PodSecurityPolicy ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643483       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643499       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.NetworkPolicy ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643529       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ValidatingWebhookConfiguration ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643538       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.CSIStorageCapacity ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643571       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Endpoints ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643579       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ReplicationController ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643697       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ClusterRole ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643952       1 reflector.go:436] k8s.io/client-go/metadata/metadatainformer/informer.go:90: watch of *v1.PartialObjectMetadata ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	I0816 22:26:11.004517       1 event.go:291] "Event occurred" object="pause-20210816222224-6986" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node pause-20210816222224-6986 status is now: NodeNotReady"
	I0816 22:26:11.080243       1 event.go:291] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:26:11.118447       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db-gkxhz" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:26:11.168905       1 event.go:291] "Event occurred" object="kube-system/etcd-pause-20210816222224-6986" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:26:11.220112       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-pause-20210816222224-6986" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:26:11.265527       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-pause-20210816222224-6986" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:26:11.294104       1 event.go:291] "Event occurred" object="kube-system/kube-proxy-7l59t" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:26:11.318132       1 node_lifecycle_controller.go:1164] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0816 22:26:11.318953       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-pause-20210816222224-6986" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	* 
	* ==> kube-controller-manager [8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4] <==
	* 
	* ==> kube-proxy [2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2] <==
	* I0816 22:25:00.641886       1 node.go:172] Successfully retrieved node IP: 192.168.50.226
	I0816 22:25:00.641938       1 server_others.go:140] Detected node IP 192.168.50.226
	W0816 22:25:00.642012       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0816 22:25:00.805515       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0816 22:25:00.805539       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0816 22:25:00.805560       1 server_others.go:212] Using iptables Proxier.
	I0816 22:25:00.806059       1 server.go:643] Version: v1.21.3
	I0816 22:25:00.807251       1 config.go:315] Starting service config controller
	I0816 22:25:00.807281       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:25:00.807307       1 config.go:224] Starting endpoint slice config controller
	I0816 22:25:00.807313       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:25:00.812511       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 22:25:00.816722       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:25:00.907844       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 22:25:00.907906       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d] <==
	* 
	* ==> kube-proxy [a8503bd796d5d979a6e1b8b5154986e8b77de391b4f091211451ea5f52808e52] <==
	* I0816 22:23:49.316430       1 node.go:172] Successfully retrieved node IP: 192.168.50.226
	I0816 22:23:49.316608       1 server_others.go:140] Detected node IP 192.168.50.226
	W0816 22:23:49.316822       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0816 22:23:49.402698       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0816 22:23:49.403462       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0816 22:23:49.404047       1 server_others.go:212] Using iptables Proxier.
	I0816 22:23:49.407950       1 server.go:643] Version: v1.21.3
	I0816 22:23:49.410864       1 config.go:315] Starting service config controller
	I0816 22:23:49.413112       1 config.go:224] Starting endpoint slice config controller
	I0816 22:23:49.419474       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:23:49.421254       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:23:49.413718       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:23:49.425958       1 shared_informer.go:247] Caches are synced for service config 
	W0816 22:23:49.425586       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:23:49.520425       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1] <==
	* E0816 22:26:09.162177       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.50.226:8443/api/v1/persistentvolumes?resourceVersion=484": net/http: TLS handshake timeout
	I0816 22:26:09.270505       1 trace.go:205] Trace[979895767]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:25:59.268) (total time: 10002ms):
	Trace[979895767]: [10.002171145s] [10.002171145s] END
	E0816 22:26:09.270735       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.50.226:8443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=567": net/http: TLS handshake timeout
	I0816 22:26:09.285882       1 trace.go:205] Trace[1750384971]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:25:59.283) (total time: 10002ms):
	Trace[1750384971]: [10.002067459s] [10.002067459s] END
	E0816 22:26:09.285914       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.50.226:8443/api/v1/nodes?resourceVersion=491": net/http: TLS handshake timeout
	I0816 22:26:09.299098       1 trace.go:205] Trace[1557489506]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:25:59.297) (total time: 10001ms):
	Trace[1557489506]: [10.001361328s] [10.001361328s] END
	E0816 22:26:09.299227       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.50.226:8443/api/v1/replicationcontrollers?resourceVersion=484": net/http: TLS handshake timeout
	I0816 22:26:09.446582       1 trace.go:205] Trace[1473987170]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:25:59.445) (total time: 10000ms):
	Trace[1473987170]: [10.000963367s] [10.000963367s] END
	E0816 22:26:09.446603       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.50.226:8443/apis/storage.k8s.io/v1/csinodes?resourceVersion=484": net/http: TLS handshake timeout
	I0816 22:26:09.496925       1 trace.go:205] Trace[1868762133]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:25:59.495) (total time: 10001ms):
	Trace[1868762133]: [10.001820772s] [10.001820772s] END
	E0816 22:26:09.496954       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.50.226:8443/api/v1/persistentvolumeclaims?resourceVersion=484": net/http: TLS handshake timeout
	I0816 22:26:09.612135       1 trace.go:205] Trace[1357747237]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:25:59.610) (total time: 10002ms):
	Trace[1357747237]: [10.002067456s] [10.002067456s] END
	E0816 22:26:09.612165       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.50.226:8443/api/v1/services?resourceVersion=484": net/http: TLS handshake timeout
	I0816 22:26:09.654977       1 trace.go:205] Trace[1687302369]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:25:59.653) (total time: 10001ms):
	Trace[1687302369]: [10.00115526s] [10.00115526s] END
	E0816 22:26:09.655006       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.50.226:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&resourceVersion=581": net/http: TLS handshake timeout
	I0816 22:26:09.756127       1 trace.go:205] Trace[2132808872]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:25:59.754) (total time: 10001ms):
	Trace[2132808872]: [10.001340189s] [10.001340189s] END
	E0816 22:26:09.756154       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.50.226:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=484": net/http: TLS handshake timeout
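
	Every failed relist above takes almost exactly 10s before "net/http: TLS handshake timeout", consistent with net/http's default 10s TLSHandshakeTimeout: the TCP connect succeeded, then the handshake stalled while the apiserver was restarting. Splitting the probe into an explicit dial stage and a handshake stage makes that distinction observable; a minimal sketch against the apiserver endpoint from these logs:

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"net"
	    	"time"
	    )

	    func main() {
	    	addr := "192.168.50.226:8443"
	    	// Stage 1: TCP connect. The scheduler's errors got past this stage,
	    	// since the sockets were accepted before the handshake stalled.
	    	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	    	if err != nil {
	    		fmt.Println("tcp:", err)
	    		return
	    	}
	    	defer conn.Close()
	    	// Stage 2: TLS handshake under its own deadline. A stall here is
	    	// what net/http reports as "TLS handshake timeout".
	    	conn.SetDeadline(time.Now().Add(3 * time.Second))
	    	tconn := tls.Client(conn, &tls.Config{InsecureSkipVerify: true})
	    	if err := tconn.Handshake(); err != nil {
	    		fmt.Println("tls:", err)
	    		return
	    	}
	    	fmt.Println("handshake OK")
	    }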
	
	* 
	* ==> kube-scheduler [8710cefecdbe5d31cd44e9ae3378bc08cbc56001326a1cb38026755196cac7d1] <==
	* E0816 22:23:21.172468       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:23:21.189536       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:23:21.300836       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.329219       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.448607       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:23:21.504104       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.504531       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:23:21.597849       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:23:21.612843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:23:21.671333       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:23:21.827198       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:23:21.852843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:23:21.867015       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.910139       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:23:23.291774       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.356078       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.452841       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:23:23.464942       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:23:23.644764       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:23:23.649142       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.710606       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.980099       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:23:24.052112       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:23:24.168543       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0816 22:23:30.043826       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
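	The "forbidden" errors above are the usual startup race: kube-scheduler begins listing resources before the apiserver has reconciled its RBAC bindings, and the cache-sync line that follows shows it recovered on its own. A quick after-the-fact verification (a sketch, assuming a working kubeconfig for this profile; not part of the test run):
	
	  # hypothetical spot-check of the scheduler's RBAC once the apiserver settles
	  kubectl auth can-i list nodes --as=system:kube-scheduler
	  kubectl auth can-i list csinodes.storage.k8s.io --as=system:kube-scheduler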
	
	* 
	* ==> kube-scheduler [97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a] <==
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:22:35 UTC, end at Mon 2021-08-16 22:26:18 UTC. --
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.626851    6454 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.628485    6454 container_manager_linux.go:278] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.628976    6454 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.629246    6454 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.629435    6454 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.629715    6454 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.630062    6454 remote_runtime.go:62] parsed scheme: ""
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.630231    6454 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.630443    6454 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.630603    6454 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.632294    6454 remote_image.go:50] parsed scheme: ""
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.632534    6454 remote_image.go:50] scheme "" not registered, fallback to default scheme
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.632564    6454 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.632583    6454 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.632816    6454 kubelet.go:404] "Attempting to sync node with API server"
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.632844    6454 kubelet.go:272] "Adding static pod path" path="/etc/kubernetes/manifests"
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.632889    6454 kubelet.go:283] "Adding apiserver pod source"
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.632922    6454 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.637422    6454 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="containerd" version="v1.4.9" apiVersion="v1alpha2"
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.679320    6454 apiserver.go:52] "Watching apiserver"
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: E0816 22:26:15.915559    6454 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.916399    6454 server.go:1190] "Started kubelet"
	Aug 16 22:26:15 pause-20210816222224-6986 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:26:15 pause-20210816222224-6986 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
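	systemd reporting "kubelet.service: Succeeded" immediately after "Started kubelet" indicates a deliberate stop rather than a crash: minikube stops the unit while rewriting the kubelet flags during a restart. A sketch for tracing the stop from inside the VM (profile name taken from this run; assumes the VM is still running):
	
	  out/minikube-linux-amd64 -p pause-20210816222224-6986 ssh \
	    'sudo journalctl -u kubelet --no-pager | tail -n 50'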
	
	* 
	* ==> storage-provisioner [f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 80 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc00032b490, 0x0)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc00032b480)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0003e1260, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0003bd400, 0x18e5530, 0xc00032a600, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0001b7d20)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0001b7d20, 0x18b3d60, 0xc0001bb8c0, 0xc00038ff01, 0xc000136600)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0001b7d20, 0x3b9aca00, 0x0, 0x48ef01, 0xc000136600)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0001b7d20, 0x3b9aca00, 0xc000136600)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
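	The goroutine above is parked in workqueue.Get inside wait.Until, i.e. an idle provision worker waiting for items rather than a fault; a dump of this shape typically appears when the Go runtime prints all goroutines on a panic or fatal signal. A sketch for pulling the provisioner's full log, assuming the container ID from the section header still exists:
	
	  out/minikube-linux-amd64 -p pause-20210816222224-6986 ssh \
	    'sudo crictl logs f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a'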
	
	

-- /stdout --
** stderr ** 
	E0816 22:26:18.174388   12086 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:26:18Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log\": lstat /var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:26:18Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log\\\": lstat /var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:26:18.368952   12086 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:26:18Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log\": lstat /var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:26:18Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log\\\": lstat /var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:26:18.456915   12086 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:26:18Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log\": lstat /var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:26:18Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log\\\": lstat /var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:26:18.540627   12086 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:26:18Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log\": lstat /var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:26:18Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log\\\": lstat /var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:26:18.736545   12086 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:26:18Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log\": lstat /var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:26:18Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log\\\": lstat /var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log: no such file or directory\"\n\n** /stderr **"
	! unable to fetch logs for: etcd [3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423], kube-apiserver [7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612], kube-controller-manager [8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4], kube-proxy [9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d], kube-scheduler [97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a]

** /stderr **
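Every `crictl logs` call above fails the same way: the path under /var/log/pods resolves to a "1.log" that no longer exists, which typically means the containers were recreated between the moment minikube listed them and the moment it collected their logs. A manual fallback (a sketch, assuming SSH access to the same VM; re-resolves a fresh container ID before tailing):

	out/minikube-linux-amd64 -p pause-20210816222224-6986 ssh \
	  'sudo crictl ps -a --name kube-apiserver -q | head -n1 | xargs -r sudo crictl logs --tail 25'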
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816222224-6986 -n pause-20210816222224-6986
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210816222224-6986 -n pause-20210816222224-6986: exit status 2 (287.127456ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestPause/serial/PauseAgain FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestPause/serial/PauseAgain]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p pause-20210816222224-6986 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p pause-20210816222224-6986 logs -n 25: exit status 110 (1.651239205s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                  Args                  |                Profile                 |  User   | Version |          Start Time           |           End Time            |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p                                     | multinode-20210816215441-6986          | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:11:11 UTC | Mon, 16 Aug 2021 22:15:19 UTC |
	|         | multinode-20210816215441-6986          |                                        |         |         |                               |                               |
	|         | --wait=true -v=8                       |                                        |         |         |                               |                               |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | --driver=kvm2                          |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| start   | -p                                     | multinode-20210816215441-6986-m03      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:15:19 UTC | Mon, 16 Aug 2021 22:16:20 UTC |
	|         | multinode-20210816215441-6986-m03      |                                        |         |         |                               |                               |
	|         | --driver=kvm2                          |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| delete  | -p                                     | multinode-20210816215441-6986-m03      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:16:20 UTC | Mon, 16 Aug 2021 22:16:21 UTC |
	|         | multinode-20210816215441-6986-m03      |                                        |         |         |                               |                               |
	| delete  | -p                                     | multinode-20210816215441-6986          | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:16:21 UTC | Mon, 16 Aug 2021 22:16:23 UTC |
	|         | multinode-20210816215441-6986          |                                        |         |         |                               |                               |
	| start   | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:18:07 UTC | Mon, 16 Aug 2021 22:19:45 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | --wait=true --preload=false            |                                        |         |         |                               |                               |
	|         | --driver=kvm2                          |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.0           |                                        |         |         |                               |                               |
	| ssh     | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:45 UTC | Mon, 16 Aug 2021 22:19:47 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | -- sudo crictl pull busybox            |                                        |         |         |                               |                               |
	| start   | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:19:48 UTC | Mon, 16 Aug 2021 22:20:39 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | -v=1 --wait=true --driver=kvm2         |                                        |         |         |                               |                               |
	|         |  --container-runtime=containerd        |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.17.3           |                                        |         |         |                               |                               |
	| ssh     | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:39 UTC | Mon, 16 Aug 2021 22:20:39 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	|         | -- sudo crictl image ls                |                                        |         |         |                               |                               |
	| delete  | -p                                     | test-preload-20210816221807-6986       | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:39 UTC | Mon, 16 Aug 2021 22:20:40 UTC |
	|         | test-preload-20210816221807-6986       |                                        |         |         |                               |                               |
	| start   | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:20:40 UTC | Mon, 16 Aug 2021 22:21:45 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	|         | --memory=2048 --driver=kvm2            |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| stop    | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:46 UTC | Mon, 16 Aug 2021 22:21:46 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	|         | --cancel-scheduled                     |                                        |         |         |                               |                               |
	| stop    | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:21:58 UTC | Mon, 16 Aug 2021 22:22:05 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	|         | --schedule 5s                          |                                        |         |         |                               |                               |
	| delete  | -p                                     | scheduled-stop-20210816222040-6986     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:23 UTC | Mon, 16 Aug 2021 22:22:24 UTC |
	|         | scheduled-stop-20210816222040-6986     |                                        |         |         |                               |                               |
	| delete  | -p kubenet-20210816222224-6986         | kubenet-20210816222224-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:22:25 UTC |
	| delete  | -p false-20210816222225-6986           | false-20210816222225-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:25 UTC | Mon, 16 Aug 2021 22:22:25 UTC |
	| start   | -p                                     | force-systemd-env-20210816222224-6986  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:24:04 UTC |
	|         | force-systemd-env-20210816222224-6986  |                                        |         |         |                               |                               |
	|         | --memory=2048 --alsologtostderr        |                                        |         |         |                               |                               |
	|         | -v=5 --driver=kvm2                     |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| -p      | force-systemd-env-20210816222224-6986  | force-systemd-env-20210816222224-6986  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:04 UTC | Mon, 16 Aug 2021 22:24:04 UTC |
	|         | ssh cat /etc/containerd/config.toml    |                                        |         |         |                               |                               |
	| delete  | -p                                     | force-systemd-env-20210816222224-6986  | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:04 UTC | Mon, 16 Aug 2021 22:24:05 UTC |
	|         | force-systemd-env-20210816222224-6986  |                                        |         |         |                               |                               |
	| start   | -p pause-20210816222224-6986           | pause-20210816222224-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:24:28 UTC |
	|         | --memory=2048                          |                                        |         |         |                               |                               |
	|         | --install-addons=false                 |                                        |         |         |                               |                               |
	|         | --wait=all --driver=kvm2               |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| start   | -p                                     | kubernetes-upgrade-20210816222225-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:25 UTC | Mon, 16 Aug 2021 22:24:48 UTC |
	|         | kubernetes-upgrade-20210816222225-6986 |                                        |         |         |                               |                               |
	|         | --memory=2200                          |                                        |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0           |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --driver=kvm2   |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| stop    | -p                                     | kubernetes-upgrade-20210816222225-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:49 UTC | Mon, 16 Aug 2021 22:24:53 UTC |
	|         | kubernetes-upgrade-20210816222225-6986 |                                        |         |         |                               |                               |
	| start   | -p                                     | offline-containerd-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:22:24 UTC | Mon, 16 Aug 2021 22:25:02 UTC |
	|         | offline-containerd-20210816222224-6986 |                                        |         |         |                               |                               |
	|         | --alsologtostderr -v=1 --memory=2048   |                                        |         |         |                               |                               |
	|         | --wait=true --driver=kvm2              |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| delete  | -p                                     | offline-containerd-20210816222224-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:25:02 UTC | Mon, 16 Aug 2021 22:25:03 UTC |
	|         | offline-containerd-20210816222224-6986 |                                        |         |         |                               |                               |
	| start   | -p pause-20210816222224-6986           | pause-20210816222224-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:24:28 UTC | Mon, 16 Aug 2021 22:25:13 UTC |
	|         | --alsologtostderr                      |                                        |         |         |                               |                               |
	|         | -v=1 --driver=kvm2                     |                                        |         |         |                               |                               |
	|         | --container-runtime=containerd         |                                        |         |         |                               |                               |
	| unpause | -p pause-20210816222224-6986           | pause-20210816222224-6986              | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:26:09 UTC | Mon, 16 Aug 2021 22:26:10 UTC |
	|         | --alsologtostderr -v=5                 |                                        |         |         |                               |                               |
	|---------|----------------------------------------|----------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:25:29
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:25:29.865623   11635 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:25:29.865693   11635 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:25:29.865697   11635 out.go:311] Setting ErrFile to fd 2...
	I0816 22:25:29.865700   11635 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:25:29.865823   11635 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:25:29.866117   11635 out.go:305] Setting JSON to false
	I0816 22:25:29.903940   11635 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":4092,"bootTime":1629148638,"procs":190,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:25:29.904062   11635 start.go:121] virtualization: kvm guest
	I0816 22:25:29.906708   11635 out.go:177] * [stopped-upgrade-20210816222405-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:25:29.908312   11635 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:29.906879   11635 notify.go:169] Checking for updates...
	I0816 22:25:29.909776   11635 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:25:29.911276   11635 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:25:29.811958   10879 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:25:29.817441   10879 start.go:413] Will wait 60s for crictl version
	I0816 22:25:29.817496   10879 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:25:29.853053   10879 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0816 22:25:29.853115   10879 ssh_runner.go:149] Run: containerd --version
	I0816 22:25:29.886777   10879 ssh_runner.go:149] Run: containerd --version
	I0816 22:25:29.912690   11635 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:25:29.913065   11635 config.go:177] Loaded profile config "stopped-upgrade-20210816222405-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0816 22:25:29.913582   11635 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:29.913647   11635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:29.927429   11635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45201
	I0816 22:25:29.927806   11635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:29.928407   11635 main.go:130] libmachine: Using API Version  1
	I0816 22:25:29.928426   11635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:29.928828   11635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:29.928990   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:29.930681   11635 out.go:177] * Kubernetes 1.21.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.21.3
	I0816 22:25:29.930718   11635 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:25:29.931039   11635 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:29.931072   11635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:29.942479   11635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40149
	I0816 22:25:29.942868   11635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:29.943307   11635 main.go:130] libmachine: Using API Version  1
	I0816 22:25:29.943323   11635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:29.943763   11635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:29.943943   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:29.977758   11635 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 22:25:29.977793   11635 start.go:278] selected driver: kvm2
	I0816 22:25:29.977800   11635 start.go:751] validating driver "kvm2" against &{Name:stopped-upgrade-20210816222405-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.16.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:stopped-upgrade-20210816222405-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.139 Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:29.977911   11635 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 22:25:29.979236   11635 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:29.979388   11635 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 22:25:29.992607   11635 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0816 22:25:29.992893   11635 cni.go:93] Creating CNI manager for ""
	I0816 22:25:29.992910   11635 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:25:29.992918   11635 start_flags.go:277] config:
	{Name:stopped-upgrade-20210816222405-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.16.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.15-snapshot4@sha256:ef1f485b5a1cfa4c989bc05e153f0a8525968ec999e242efff871cbb31649c16 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:stopped-upgrade-20210816222405-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.139 Port:8443 KubernetesVersion:v1.20.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:29.993026   11635 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:29.994994   11635 out.go:177] * Starting control plane node stopped-upgrade-20210816222405-6986 in cluster stopped-upgrade-20210816222405-6986
	I0816 22:25:29.995015   11635 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	W0816 22:25:30.055078   11635 preload.go:114] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.20.0-containerd-overlay2-amd64.tar.lz4 status code: 404
	I0816 22:25:30.055326   11635 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/stopped-upgrade-20210816222405-6986/config.json ...
	I0816 22:25:30.055447   11635 cache.go:108] acquiring lock: {Name:mk44a899d3e13d1e1a41236ca93bfa4c540d90ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055469   11635 cache.go:108] acquiring lock: {Name:mk54690e8b8165106a936f57493f4a5f28a2f038 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055479   11635 cache.go:108] acquiring lock: {Name:mk5fa6434b4b67f17fb247c2a1febaaba95afc21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055522   11635 cache.go:108] acquiring lock: {Name:mk248d78835e4d0dd7deebdde93e709059900376 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055596   11635 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:25:30.055621   11635 start.go:313] acquiring machines lock for stopped-upgrade-20210816222405-6986: {Name:mk808edd60d1305a42bb85791729eff4573dbb15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 22:25:30.055639   11635 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.20.0
	I0816 22:25:30.055653   11635 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.20.0
	I0816 22:25:30.055661   11635 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.20.0
	I0816 22:25:30.055667   11635 cache.go:108] acquiring lock: {Name:mk74554b1ad079cff2e1d01801c80cf158c3d0db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055449   11635 cache.go:108] acquiring lock: {Name:mk21e396b81c69d7c5a1e31157ecfaad7d142ec3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055784   11635 image.go:133] retrieving image: k8s.gcr.io/coredns:1.7.0
	I0816 22:25:30.055784   11635 cache.go:108] acquiring lock: {Name:mk4f799cc9e7aed39d4f75ef9ab783b3653bcaec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055644   11635 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 exists
	I0816 22:25:30.055857   11635 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.20.0
	I0816 22:25:30.055838   11635 cache.go:97] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.4" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4" took 404.352µs
	I0816 22:25:30.055869   11635 cache.go:81] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.4 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 succeeded
	I0816 22:25:30.055492   11635 cache.go:108] acquiring lock: {Name:mk35426dc160f9577622987ee511aee9c6194a42 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055863   11635 cache.go:108] acquiring lock: {Name:mk7034d6c48699d6f6387acd67a2f9aff6580cde Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055899   11635 cache.go:108] acquiring lock: {Name:mk83079d57b39c409d978da9d87f61040fa2879d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:25:30.055991   11635 image.go:133] retrieving image: k8s.gcr.io/pause:3.2
	I0816 22:25:30.055993   11635 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0816 22:25:30.056005   11635 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 exists
	I0816 22:25:30.056013   11635 cache.go:97] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 117.345µs
	I0816 22:25:30.056023   11635 cache.go:97] cache image "docker.io/kubernetesui/dashboard:v2.1.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0" took 216.257µs
	I0816 22:25:30.056038   11635 cache.go:81] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0816 22:25:30.056041   11635 cache.go:81] save to tar file docker.io/kubernetesui/dashboard:v2.1.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 succeeded
	I0816 22:25:30.056058   11635 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.13-0
	I0816 22:25:30.069442   11635 image.go:171] found k8s.gcr.io/pause:3.2 locally: &{UncompressedImageCore:0xc0015120b8 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:30.069470   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2
	I0816 22:25:30.125132   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2 exists
	I0816 22:25:30.125180   11635 cache.go:97] cache image "k8s.gcr.io/pause:3.2" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2" took 69.39782ms
	I0816 22:25:30.125195   11635 cache.go:81] save to tar file k8s.gcr.io/pause:3.2 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2 succeeded
	I0816 22:25:30.603136   11635 image.go:171] found k8s.gcr.io/coredns:1.7.0 locally: &{UncompressedImageCore:0xc0001b80d0 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:30.603171   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0
	I0816 22:25:30.944848   11635 start.go:317] acquired machines lock for "stopped-upgrade-20210816222405-6986" in 889.202815ms
	I0816 22:25:30.944895   11635 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:25:30.944913   11635 fix.go:55] fixHost starting: 
	I0816 22:25:30.945363   11635 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:25:30.945411   11635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:25:30.958997   11635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43851
	I0816 22:25:30.959558   11635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:25:30.960174   11635 main.go:130] libmachine: Using API Version  1
	I0816 22:25:30.960191   11635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:25:30.960618   11635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:25:30.961025   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:30.961188   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetState
	I0816 22:25:30.964772   11635 fix.go:108] recreateIfNeeded on stopped-upgrade-20210816222405-6986: state=Stopped err=<nil>
	I0816 22:25:30.964804   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	W0816 22:25:30.964947   11635 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:25:29.920921   10879 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on containerd 1.4.9 ...
	I0816 22:25:29.920959   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) Calling .GetIP
	I0816 22:25:29.926612   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:29.927031   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:67:21", ip: ""} in network mk-kubernetes-upgrade-20210816222225-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:25:06 +0000 UTC Type:0 Mac:52:54:00:92:67:21 Iaid: IPaddr:192.168.116.91 Prefix:24 Hostname:kubernetes-upgrade-20210816222225-6986 Clientid:01:52:54:00:92:67:21}
	I0816 22:25:29.927059   10879 main.go:130] libmachine: (kubernetes-upgrade-20210816222225-6986) DBG | domain kubernetes-upgrade-20210816222225-6986 has defined IP address 192.168.116.91 and MAC address 52:54:00:92:67:21 in network mk-kubernetes-upgrade-20210816222225-6986
	I0816 22:25:29.927194   10879 ssh_runner.go:149] Run: grep 192.168.116.1	host.minikube.internal$ /etc/hosts
	I0816 22:25:29.931935   10879 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.116.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:25:29.945360   10879 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 22:25:29.945419   10879 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:29.980071   10879 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:25:29.980092   10879 containerd.go:517] Images already preloaded, skipping extraction
	I0816 22:25:29.980136   10879 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:30.014510   10879 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:25:30.014534   10879 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:25:30.014596   10879 ssh_runner.go:149] Run: sudo crictl info
	I0816 22:25:30.049631   10879 cni.go:93] Creating CNI manager for ""
	I0816 22:25:30.049653   10879 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:25:30.049662   10879 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:25:30.049673   10879 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.116.91 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-20210816222225-6986 NodeName:kubernetes-upgrade-20210816222225-6986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.116.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.116.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:25:30.049804   10879 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.116.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "kubernetes-upgrade-20210816222225-6986"
	  kubeletExtraArgs:
	    node-ip: 192.168.116.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.116.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
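	The "0%!"(MISSING) artifacts in the evictionHard block above are printf mangling of the literal value "0%"; the YAML kubeadm actually receives is well-formed. To compare this generated config against kubeadm's shipped defaults (a hypothetical check, runnable wherever kubeadm v1.22 is installed; not part of the test run):
	
	  kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration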
	
	I0816 22:25:30.049880   10879 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubernetes-upgrade-20210816222225-6986 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.116.91 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210816222225-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
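	After the unit file and drop-in are copied over (the scp lines below), minikube reloads systemd and restarts the kubelet with the new flags; the manual equivalent from inside the VM would be roughly:
	
	  sudo systemctl daemon-reload
	  sudo systemctl restart kubelet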
	I0816 22:25:30.049928   10879 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0816 22:25:30.057977   10879 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:25:30.058039   10879 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:25:30.069033   10879 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (559 bytes)
	I0816 22:25:30.084505   10879 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0816 22:25:30.099259   10879 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0816 22:25:30.117789   10879 ssh_runner.go:149] Run: grep 192.168.116.91	control-plane.minikube.internal$ /etc/hosts
	I0816 22:25:30.123120   10879 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.116.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:25:30.137790   10879 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986 for IP: 192.168.116.91
	I0816 22:25:30.137839   10879 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:25:30.137860   10879 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:25:30.137924   10879 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/client.key
	I0816 22:25:30.137959   10879 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/apiserver.key.0bcaee26
	I0816 22:25:30.137982   10879 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/proxy-client.key
	I0816 22:25:30.138107   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem (1338 bytes)
	W0816 22:25:30.138164   10879 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
	I0816 22:25:30.138180   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 22:25:30.138217   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:25:30.138260   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:25:30.138286   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1675 bytes)
	I0816 22:25:30.138335   10879 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:25:30.139672   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:25:30.162501   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 22:25:30.187988   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:25:30.208990   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 22:25:30.232139   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:25:30.260638   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 22:25:30.287352   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:25:30.316722   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 22:25:30.346450   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
	I0816 22:25:30.369471   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
	I0816 22:25:30.397914   10879 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:25:30.433581   10879 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:25:30.454657   10879 ssh_runner.go:149] Run: openssl version
	I0816 22:25:30.463107   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
	I0816 22:25:30.475906   10879 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6986.pem
	I0816 22:25:30.482988   10879 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:49 /usr/share/ca-certificates/6986.pem
	I0816 22:25:30.483059   10879 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
	I0816 22:25:30.492438   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
	I0816 22:25:30.507780   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
	I0816 22:25:30.522931   10879 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/69862.pem
	I0816 22:25:30.529692   10879 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:49 /usr/share/ca-certificates/69862.pem
	I0816 22:25:30.529753   10879 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
	I0816 22:25:30.540308   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:25:30.555395   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:25:30.571425   10879 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:30.577689   10879 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:42 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:30.577743   10879 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:25:30.588310   10879 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
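The ls/openssl/ln sequence above follows OpenSSL's hashed-directory convention: each CA in /etc/ssl/certs needs a symlink named after the certificate's subject hash (with a .0 suffix to disambiguate collisions) so verification can locate it. A bash sketch with a hypothetical certificate path:

	cert=/usr/share/ca-certificates/example.pem
	hash=$(openssl x509 -hash -noout -in "$cert")   # subject hash, e.g. b5213941
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"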
	I0816 22:25:30.599536   10879 kubeadm.go:390] StartCluster: {Name:kubernetes-upgrade-20210816222225-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:kubernetes-upgrade-20210816222225-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.116.91 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:25:30.599646   10879 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0816 22:25:30.599708   10879 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:25:30.645578   10879 cri.go:76] found id: ""
	I0816 22:25:30.645662   10879 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:25:30.656753   10879 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:25:30.656775   10879 kubeadm.go:600] restartCluster start
	I0816 22:25:30.656823   10879 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:25:30.665356   10879 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:25:30.666456   10879 kubeconfig.go:117] verify returned: extract IP: "kubernetes-upgrade-20210816222225-6986" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:25:30.666789   10879 kubeconfig.go:128] "kubernetes-upgrade-20210816222225-6986" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:25:30.667454   10879 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:25:30.668301   10879 kapi.go:59] client config for kubernetes-upgrade-20210816222225-6986: &rest.Config{Host:"https://192.168.116.91:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/client.crt", KeyFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kubernetes-upgrade-20210816222225-6986/client.key", CAFile:"/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x17e3460), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0816 22:25:30.670154   10879 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:25:30.680113   10879 kubeadm.go:568] needs reconfigure: configs differ:
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta2
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.116.91
	@@ -17,7 +17,7 @@
	     node-ip: 192.168.116.91
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta1
	+apiVersion: kubeadm.k8s.io/v1beta2
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.116.91"]
	@@ -31,7 +31,7 @@
	   extraArgs:
	     leader-elect: "false"
	 certificatesDir: /var/lib/minikube/certs
	-clusterName: kubernetes-upgrade-20210816222225-6986
	+clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 dns:
	   type: CoreDNS
	@@ -39,8 +39,8 @@
	   local:
	     dataDir: /var/lib/minikube/etcd
	     extraArgs:
	-      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.116.91:2381
	-kubernetesVersion: v1.14.0
	+      proxy-refresh-interval: "70000"
	+kubernetesVersion: v1.22.0-rc.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
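The "needs reconfigure" decision comes from the exit status of the diff run at 22:25:30.670: diff exits 0 when the deployed kubeadm.yaml matches the freshly generated one and 1 when they differ. A bash sketch of the same check:

	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	    echo "running cluster config is current"
	else
	    echo "configs differ: cluster needs reconfigure"
	fi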
	I0816 22:25:30.680129   10879 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:25:30.680144   10879 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:25:30.680191   10879 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:25:30.726916   10879 cri.go:76] found id: ""
	I0816 22:25:30.726997   10879 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:25:30.746591   10879 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:25:30.758779   10879 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:25:30.758838   10879 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:25:30.769229   10879 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:25:30.769260   10879 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:30.999868   10879 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:32.699195   10879 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.699296673s)
	I0816 22:25:32.699238   10879 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:33.156614   10879 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:33.351483   10879 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:25:33.492135   10879 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:25:33.492214   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:34.016071   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:30.969443   11635 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-20210816222405-6986" ...
	I0816 22:25:30.969474   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .Start
	I0816 22:25:30.969661   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Ensuring networks are active...
	I0816 22:25:30.972266   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Ensuring network default is active
	I0816 22:25:30.972626   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Ensuring network minikube-net is active
	I0816 22:25:30.973378   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Getting domain xml...
	I0816 22:25:30.975969   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Creating domain...
	I0816 22:25:31.337824   11635 image.go:171] found k8s.gcr.io/kube-scheduler:v1.20.0 locally: &{UncompressedImageCore:0xc0005f0060 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:31.337868   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0
	I0816 22:25:31.505137   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Waiting to get IP...
	I0816 22:25:31.505762   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:31.507077   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has current primary IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:31.507133   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Found IP for machine: 192.168.94.139
	I0816 22:25:31.507148   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Reserving static IP address...
	I0816 22:25:31.507652   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "stopped-upgrade-20210816222405-6986", mac: "52:54:00:48:08:ec", ip: "192.168.94.139"} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:24:28 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:31.507684   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-20210816222405-6986", mac: "52:54:00:48:08:ec", ip: "192.168.94.139"}
	I0816 22:25:31.507704   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | Getting to WaitForSSH function...
	I0816 22:25:31.507722   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Reserved static IP address: 192.168.94.139
	I0816 22:25:31.507732   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Waiting for SSH to be available...
	I0816 22:25:31.513971   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:31.514423   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:24:28 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:31.514456   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:31.514613   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | Using SSH client type: external
	I0816 22:25:31.514740   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816222405-6986/id_rsa (-rw-------)
	I0816 22:25:31.514792   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.94.139 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816222405-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:25:31.514855   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | About to run SSH command:
	I0816 22:25:31.514871   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | exit 0
	I0816 22:25:32.653039   11635 image.go:171] found k8s.gcr.io/kube-controller-manager:v1.20.0 locally: &{UncompressedImageCore:0xc000114030 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:32.653105   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0
	I0816 22:25:32.791720   11635 image.go:171] found k8s.gcr.io/kube-apiserver:v1.20.0 locally: &{UncompressedImageCore:0xc001512008 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:32.791795   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0
	I0816 22:25:34.107008   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 exists
	I0816 22:25:34.107061   11635 cache.go:97] cache image "k8s.gcr.io/coredns:1.7.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0" took 4.051394985s
	I0816 22:25:34.107085   11635 cache.go:81] save to tar file k8s.gcr.io/coredns:1.7.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 succeeded
	I0816 22:25:34.315350   11635 image.go:171] found k8s.gcr.io/etcd:3.4.13-0 locally: &{UncompressedImageCore:0xc000114050 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:34.315401   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0
	I0816 22:25:34.340626   11635 image.go:171] found k8s.gcr.io/kube-proxy:v1.20.0 locally: &{UncompressedImageCore:0xc000010208 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:34.340673   11635 cache.go:162] opening:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0
	I0816 22:25:34.809104   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 exists
	I0816 22:25:34.809158   11635 cache.go:97] cache image "k8s.gcr.io/kube-scheduler:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0" took 4.753722894s
	I0816 22:25:34.809180   11635 cache.go:81] save to tar file k8s.gcr.io/kube-scheduler:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 succeeded
	I0816 22:25:34.516083   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:35.015857   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:35.519104   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:36.016162   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:36.515658   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:37.015674   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:37.515739   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:38.015673   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:38.515650   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:39.015668   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:39.515980   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:40.015682   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:40.515733   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:41.016134   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:41.516902   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:42.015609   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:42.516119   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:43.016137   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:43.515737   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:44.019076   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:40.829030   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 exists
	I0816 22:25:40.829145   11635 cache.go:97] cache image "k8s.gcr.io/kube-controller-manager:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0" took 10.773666591s
	I0816 22:25:40.829193   11635 cache.go:81] save to tar file k8s.gcr.io/kube-controller-manager:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 succeeded
	I0816 22:25:41.800993   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 exists
	I0816 22:25:41.801047   11635 cache.go:97] cache image "k8s.gcr.io/kube-apiserver:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0" took 11.745601732s
	I0816 22:25:41.801066   11635 cache.go:81] save to tar file k8s.gcr.io/kube-apiserver:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 succeeded
	I0816 22:25:43.868728   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 exists
	I0816 22:25:43.868783   11635 cache.go:97] cache image "k8s.gcr.io/kube-proxy:v1.20.0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0" took 13.813268232s
	I0816 22:25:43.868801   11635 cache.go:81] save to tar file k8s.gcr.io/kube-proxy:v1.20.0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 succeeded
	I0816 22:25:44.515937   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:45.016224   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:45.516460   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:46.016178   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:46.516524   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:47.015786   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:47.516658   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:48.015708   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:48.515976   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:49.016470   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
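The repeated pgrep runs above are a roughly 500 ms poll for the apiserver process: -f matches against the full command line, -x requires an exact pattern match, and -n picks the newest match. A bash sketch of the wait loop, with a hypothetical timeout:

	timeout 120 bash -c \
	  'until sudo pgrep -xnf "kube-apiserver.*minikube.*" >/dev/null; do sleep 0.5; done' \
	  && echo "apiserver process is up"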
	I0816 22:25:50.674039   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | SSH cmd err, output: <nil>: 
	I0816 22:25:50.674356   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetConfigRaw
	I0816 22:25:50.674991   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetIP
	I0816 22:25:50.680301   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.680676   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:50.680700   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.680937   11635 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/stopped-upgrade-20210816222405-6986/config.json ...
	I0816 22:25:50.681113   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:50.681286   11635 machine.go:88] provisioning docker machine ...
	I0816 22:25:50.681307   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:50.681460   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetMachineName
	I0816 22:25:50.681591   11635 buildroot.go:166] provisioning hostname "stopped-upgrade-20210816222405-6986"
	I0816 22:25:50.681630   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetMachineName
	I0816 22:25:50.681772   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:50.686529   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.686855   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:50.686891   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.687006   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHPort
	I0816 22:25:50.687142   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:50.687255   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:50.687342   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHUsername
	I0816 22:25:50.687442   11635 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:50.687632   11635 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.94.139 22 <nil> <nil>}
	I0816 22:25:50.687655   11635 main.go:130] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-20210816222405-6986 && echo "stopped-upgrade-20210816222405-6986" | sudo tee /etc/hostname
	I0816 22:25:50.813329   11635 main.go:130] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-20210816222405-6986
	
	I0816 22:25:50.813359   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:50.818466   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.818799   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:50.818835   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.818990   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHPort
	I0816 22:25:50.819183   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:50.819328   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:50.819469   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHUsername
	I0816 22:25:50.819617   11635 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:50.819779   11635 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.94.139 22 <nil> <nil>}
	I0816 22:25:50.819808   11635 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-20210816222405-6986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-20210816222405-6986/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-20210816222405-6986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:25:50.843338   11635 cache.go:157] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 exists
	I0816 22:25:50.843375   11635 cache.go:97] cache image "k8s.gcr.io/etcd:3.4.13-0" -> "/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0" took 20.787884619s
	I0816 22:25:50.843386   11635 cache.go:81] save to tar file k8s.gcr.io/etcd:3.4.13-0 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 succeeded
	I0816 22:25:50.843403   11635 cache.go:88] Successfully saved all images to host disk.
	I0816 22:25:50.936578   11635 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:25:50.936617   11635 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:25:50.936644   11635 buildroot.go:174] setting up certificates
	I0816 22:25:50.936656   11635 provision.go:83] configureAuth start
	I0816 22:25:50.936668   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetMachineName
	I0816 22:25:50.936926   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetIP
	I0816 22:25:50.941824   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.942162   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:50.942193   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.942301   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:50.946653   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.946938   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:50.946977   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:50.947097   11635 provision.go:138] copyHostCerts
	I0816 22:25:50.947147   11635 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:25:50.947156   11635 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:25:50.947203   11635 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:25:50.947271   11635 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:25:50.947280   11635 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:25:50.947296   11635 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
	I0816 22:25:50.947338   11635 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:25:50.947349   11635 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:25:50.947365   11635 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:25:50.947403   11635 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-20210816222405-6986 san=[192.168.94.139 192.168.94.139 localhost 127.0.0.1 minikube stopped-upgrade-20210816222405-6986]
	I0816 22:25:51.032615   11635 provision.go:172] copyRemoteCerts
	I0816 22:25:51.032661   11635 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:25:51.032682   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:51.037604   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.037874   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:51.037905   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.038009   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHPort
	I0816 22:25:51.038148   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:51.038246   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHUsername
	I0816 22:25:51.038339   11635 sshutil.go:53] new ssh client: &{IP:192.168.94.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816222405-6986/id_rsa Username:docker}
	I0816 22:25:51.123210   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:25:51.139587   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1277 bytes)
	I0816 22:25:51.155341   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:25:51.171555   11635 provision.go:86] duration metric: configureAuth took 234.887855ms
	I0816 22:25:51.171579   11635 buildroot.go:189] setting minikube options for container-runtime
	I0816 22:25:51.171722   11635 config.go:177] Loaded profile config "stopped-upgrade-20210816222405-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0816 22:25:51.171746   11635 machine.go:91] provisioned docker machine in 490.44902ms
	I0816 22:25:51.171754   11635 start.go:267] post-start starting for "stopped-upgrade-20210816222405-6986" (driver="kvm2")
	I0816 22:25:51.171760   11635 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:25:51.171782   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:51.172055   11635 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:25:51.172077   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:51.176933   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.177267   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:51.177298   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.177441   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHPort
	I0816 22:25:51.177602   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:51.177724   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHUsername
	I0816 22:25:51.177820   11635 sshutil.go:53] new ssh client: &{IP:192.168.94.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816222405-6986/id_rsa Username:docker}
	I0816 22:25:51.280843   11635 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:25:51.285664   11635 info.go:137] Remote host: Buildroot 2020.02.8
	I0816 22:25:51.285688   11635 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:25:51.285748   11635 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:25:51.285858   11635 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0816 22:25:51.285968   11635 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:25:51.293397   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:25:51.309338   11635 start.go:270] post-start completed in 137.570449ms
	I0816 22:25:51.309377   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:51.309623   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:51.315839   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.316232   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:51.316258   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.316441   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHPort
	I0816 22:25:51.316643   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:51.316819   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:51.316941   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHUsername
	I0816 22:25:51.317135   11635 main.go:130] libmachine: Using SSH client type: native
	I0816 22:25:51.317300   11635 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.94.139 22 <nil> <nil>}
	I0816 22:25:51.317315   11635 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0816 22:25:51.429012   11635 main.go:130] libmachine: SSH cmd err, output: <nil>: 1629152751.360423899
	
	I0816 22:25:51.429036   11635 fix.go:212] guest clock: 1629152751.360423899
	I0816 22:25:51.429046   11635 fix.go:225] Guest: 2021-08-16 22:25:51.360423899 +0000 UTC Remote: 2021-08-16 22:25:51.30960232 +0000 UTC m=+21.494401500 (delta=50.821579ms)
	I0816 22:25:51.429069   11635 fix.go:196] guest clock delta is within tolerance: 50.821579ms
	I0816 22:25:51.429077   11635 fix.go:57] fixHost completed within 20.484163709s
	I0816 22:25:51.429086   11635 start.go:80] releasing machines lock for "stopped-upgrade-20210816222405-6986", held for 20.484204502s
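The guest-clock check above runs date +%s.%N inside the VM, subtracts it from the host's wall clock at the moment the command returns, and accepts the result if the skew stays small (about 51 ms here). A bash sketch of the same measurement, with hypothetical SSH details:

	guest=$(ssh -i /path/to/id_rsa docker@192.168.94.139 'date +%s.%N')
	host=$(date +%s.%N)
	awk -v g="$guest" -v h="$host" 'BEGIN { printf "guest/host clock delta: %.3fs\n", h - g }'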
	I0816 22:25:51.429135   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:51.429381   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetIP
	I0816 22:25:51.434740   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.435145   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:51.435177   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.435313   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:51.435469   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:51.435909   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .DriverName
	I0816 22:25:51.436161   11635 ssh_runner.go:149] Run: systemctl --version
	I0816 22:25:51.436188   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:51.436224   11635 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:25:51.436266   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHHostname
	I0816 22:25:51.441919   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.442250   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:51.442273   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.442407   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHPort
	I0816 22:25:51.442580   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:51.442707   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHUsername
	I0816 22:25:51.442844   11635 sshutil.go:53] new ssh client: &{IP:192.168.94.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816222405-6986/id_rsa Username:docker}
	I0816 22:25:51.442940   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.443308   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:51.443347   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:51.443442   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHPort
	I0816 22:25:51.443577   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHKeyPath
	I0816 22:25:51.443703   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetSSHUsername
	I0816 22:25:51.443818   11635 sshutil.go:53] new ssh client: &{IP:192.168.94.139 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/stopped-upgrade-20210816222405-6986/id_rsa Username:docker}
	I0816 22:25:51.580011   11635 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0816 22:25:51.580081   11635 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0816 22:25:51.617346   11635 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0816 22:25:51.629075   11635 docker.go:153] disabling docker service ...
	I0816 22:25:51.629131   11635 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:25:51.640426   11635 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:25:51.649102   11635 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:25:51.808471   11635 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:25:51.947554   11635 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
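
The docker.go lines above disable competing runtimes before containerd is configured: crio is stopped, then the Docker socket and service are stopped, disabled, and masked, with an is-active probe to confirm. A sketch of that sequence over a generic SSH command runner (the run helper is a hypothetical stand-in, not minikube's API):

package runtimesketch

// disableDocker mirrors the systemctl sequence in the log: stop the
// Docker socket and service (tolerating failures when they are not
// running), then disable and mask the units so only containerd remains.
func disableDocker(run func(cmd string) error) error {
	_ = run("sudo systemctl stop -f docker.socket")
	_ = run("sudo systemctl stop -f docker.service")
	if err := run("sudo systemctl disable docker.socket"); err != nil {
		return err
	}
	return run("sudo systemctl mask docker.service")
}
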
	I0816 22:25:51.958967   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:25:51.978504   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuMiIKICAgIHN0YXRzX2NvbGxlY3RfcGVyaW9kID0gMTAKICAgIGVuYWJsZV90bHNfc3RyZWFtaW5nID0gZmFsc2UKICAgIG1heF9jb250YWluZXJfbG9nX2xpbmVfc2l6ZSA9IDE2Mzg0CgoJW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiXQogICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZF0KICAgICAgICBbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSIuY29udGFpbmVyZC5ydW50aW1lc10KICAgICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmNdCiAgICAgICAgICAgIHJ1bnRpbWVfdHlwZSA9ICJpby5jb250YWluZXJkLnJ1bmMudjIiCiAgICAgICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzLnJ1bmMub3B0aW9uc10KICAgICAgICAgICAgICBTeXN0ZW1kQ2dyb3VwID0gZmFsc2UKCiAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZF0KICAgICAgc25hcHNob3R0ZXIgPSAib3ZlcmxheWZzIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC5kZWZhdWx0X3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmQudW50cnVzdGVkX3dvcmtsb2FkX3J1bnRpbWVdCiAgICAgICAgcnVudGltZV90eXBlID0gIiIKICAgICAgICBydW50aW1lX2VuZ2luZSA9ICIiCiAgICAgICAgcnVudGltZV9yb290ID0gIiIKICAgIFtwbHVnaW5zLmNyaS5jbmldCiAgICAgIGJpbl9kaXIgPSAiL29wdC9jbmkvYmluIgogICAgICBjb25mX2RpciA9ICIvZXRjL2NuaS9uZXQuZCIKICAgICAgY29uZl90ZW1wbGF0ZSA9ICIiCiAgICBbcGx1Z2lucy5jcmkucmVnaXN0cnldCiAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzXQogICAgICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeS5taXJyb3JzLiJkb2NrZXIuaW8iXQogICAgICAgICAgZW5kcG9pbnQgPSBbImh0dHBzOi8vcmVnaXN0cnktMS5kb2NrZXIuaW8iXQogICAgICAgIFtwbHVnaW5zLmRpZmYtc2VydmljZV0KICAgIGRlZmF1bHQgPSBbIndhbGtpbmciXQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
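
The base64 payload piped through `base64 -d` above is the containerd configuration minikube writes to /etc/containerd/config.toml. Decoded by hand, the payload opens as follows (an excerpt only; treat it as illustrative rather than authoritative):

root = "/var/lib/containerd"
state = "/run/containerd"
oom_score = 0
[grpc]
  address = "/run/containerd/containerd.sock"
  uid = 0
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
...
[plugins]
  [plugins.cri]
    stream_server_port = "10010"
    sandbox_image = "k8s.gcr.io/pause:3.2"
    max_container_log_line_size = 16384
...
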
	I0816 22:25:51.992652   11635 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:25:51.998569   11635 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:25:51.998615   11635 ssh_runner.go:149] Run: sudo modprobe br_netfilter
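
The two commands above are a probe-and-fallback: the sysctl read exits 255 because /proc/sys/net/bridge/bridge-nf-call-iptables does not exist until the br_netfilter module is loaded, so minikube loads the module explicitly. Sketched in Go over the same hypothetical run helper:

package netfiltersketch

// ensureBridgeNetfilter probes the bridge-netfilter sysctl; if the
// probe fails (module not yet loaded, as in the status-255 error
// above), it falls back to loading br_netfilter with modprobe.
func ensureBridgeNetfilter(run func(cmd string) error) error {
	if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err == nil {
		return nil
	}
	return run("sudo modprobe br_netfilter")
}
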
	I0816 22:25:52.014222   11635 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 22:25:52.020197   11635 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:25:52.145893   11635 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:25:52.190011   11635 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0816 22:25:52.190112   11635 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:25:52.195055   11635 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0816 22:25:53.300309   11635 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
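
After restarting containerd, start.go waits up to 60s for the socket path to exist, retrying the stat (a single ~1.1s retry sufficed here). A bounded polling loop of that shape, with hypothetical names:

package socketsketch

import (
	"fmt"
	"time"
)

// waitForSocket polls stat(1) on the socket path until it appears or
// the deadline passes, sleeping between attempts like retry.go above.
func waitForSocket(run func(cmd string) error, path string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := run("stat " + path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(interval)
	}
}
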
	I0816 22:25:53.306277   11635 start.go:413] Will wait 60s for crictl version
	I0816 22:25:53.306323   11635 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:25:53.324774   11635 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.3
	RuntimeApiVersion:  v1alpha2
	I0816 22:25:53.324831   11635 ssh_runner.go:149] Run: containerd --version
	I0816 22:25:53.361138   11635 ssh_runner.go:149] Run: containerd --version
	I0816 22:25:49.515642   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:50.015846   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:50.516027   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:51.015729   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:51.516379   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:52.016301   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:52.516625   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:53.015666   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:53.516566   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:54.016518   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:53.402219   11635 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.4.3 ...
	I0816 22:25:53.402261   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) Calling .GetIP
	I0816 22:25:53.407765   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:53.408040   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:48:08:ec", ip: ""} in network minikube-net: {Iface:virbr6 ExpiryTime:2021-08-16 23:25:46 +0000 UTC Type:0 Mac:52:54:00:48:08:ec Iaid: IPaddr:192.168.94.139 Prefix:24 Hostname:stopped-upgrade-20210816222405-6986 Clientid:01:52:54:00:48:08:ec}
	I0816 22:25:53.408078   11635 main.go:130] libmachine: (stopped-upgrade-20210816222405-6986) DBG | domain stopped-upgrade-20210816222405-6986 has defined IP address 192.168.94.139 and MAC address 52:54:00:48:08:ec in network minikube-net
	I0816 22:25:53.408229   11635 ssh_runner.go:149] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0816 22:25:53.412479   11635 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:25:53.424358   11635 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0816 22:25:53.424401   11635 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:25:53.448218   11635 containerd.go:609] couldn't find preloaded image for "gcr.io/k8s-minikube/storage-provisioner:v5". assuming images are not preloaded.
	I0816 22:25:53.448242   11635 cache_images.go:78] LoadImages start: [k8s.gcr.io/kube-apiserver:v1.20.0 k8s.gcr.io/kube-controller-manager:v1.20.0 k8s.gcr.io/kube-scheduler:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0 k8s.gcr.io/pause:3.2 k8s.gcr.io/etcd:3.4.13-0 k8s.gcr.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5 docker.io/kubernetesui/dashboard:v2.1.0 docker.io/kubernetesui/metrics-scraper:v1.0.4]
	I0816 22:25:53.448286   11635 image.go:133] retrieving image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0816 22:25:53.448315   11635 image.go:133] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:25:53.448336   11635 image.go:133] retrieving image: k8s.gcr.io/kube-apiserver:v1.20.0
	I0816 22:25:53.448339   11635 image.go:133] retrieving image: docker.io/kubernetesui/dashboard:v2.1.0
	I0816 22:25:53.448315   11635 image.go:133] retrieving image: k8s.gcr.io/pause:3.2
	I0816 22:25:53.448407   11635 image.go:133] retrieving image: k8s.gcr.io/etcd:3.4.13-0
	I0816 22:25:53.448419   11635 image.go:133] retrieving image: k8s.gcr.io/kube-proxy:v1.20.0
	I0816 22:25:53.448384   11635 image.go:133] retrieving image: k8s.gcr.io/coredns:1.7.0
	I0816 22:25:53.448483   11635 image.go:133] retrieving image: k8s.gcr.io/kube-scheduler:v1.20.0
	I0816 22:25:53.448495   11635 image.go:133] retrieving image: k8s.gcr.io/kube-controller-manager:v1.20.0
	I0816 22:25:53.461290   11635 image.go:171] found k8s.gcr.io/pause:3.2 locally: &{UncompressedImageCore:0xc000010aa0 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:53.461345   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/pause:3.2 | grep 80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"
	I0816 22:25:53.970932   11635 cache_images.go:106] "k8s.gcr.io/pause:3.2" needs transfer: "k8s.gcr.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0816 22:25:53.970997   11635 cri.go:205] Removing image: k8s.gcr.io/pause:3.2
	I0816 22:25:53.971051   11635 ssh_runner.go:149] Run: which crictl
	I0816 22:25:53.982673   11635 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/pause:3.2
	I0816 22:25:53.998453   11635 image.go:171] found gcr.io/k8s-minikube/storage-provisioner:v5 locally: &{UncompressedImageCore:0xc000010ac8 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:53.998519   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep gcr.io/k8s-minikube/storage-provisioner:v5 | grep 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"
	I0816 22:25:54.075847   11635 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2
	I0816 22:25:54.075942   11635 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.2
	I0816 22:25:54.205435   11635 image.go:171] found index.docker.io/kubernetesui/metrics-scraper:v1.0.4 locally: &{UncompressedImageCore:0xc0001141d0 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:54.205506   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/metrics-scraper:v1.0.4 | grep 86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4"
	I0816 22:25:54.279275   11635 image.go:171] found k8s.gcr.io/coredns:1.7.0 locally: &{UncompressedImageCore:0xc000114020 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:54.279354   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns:1.7.0 | grep bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16"
	I0816 22:25:54.516398   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:55.015670   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:55.515685   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:56.016261   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:56.516304   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:57.016559   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:57.515664   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:58.016621   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:58.515698   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:59.015770   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:25:54.977194   11635 ssh_runner.go:306] existence check for /var/lib/minikube/images/pause_3.2: stat -c "%s %y" /var/lib/minikube/images/pause_3.2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/pause_3.2': No such file or directory
	I0816 22:25:54.977231   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2 --> /var/lib/minikube/images/pause_3.2 (325632 bytes)
	I0816 22:25:54.977302   11635 cache_images.go:106] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0816 22:25:54.977338   11635 cri.go:205] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:25:54.977370   11635 ssh_runner.go:149] Run: which crictl
	I0816 22:25:55.080433   11635 containerd.go:280] Loading image: /var/lib/minikube/images/pause_3.2
	I0816 22:25:55.080518   11635 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/pause_3.2
	I0816 22:25:55.246428   11635 image.go:171] found k8s.gcr.io/kube-scheduler:v1.20.0 locally: &{UncompressedImageCore:0xc0012ae018 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:55.246517   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-scheduler:v1.20.0 | grep 3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899"
	I0816 22:25:55.341693   11635 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/metrics-scraper:v1.0.4 | grep 86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4": (1.136152354s)
	I0816 22:25:55.341751   11635 cache_images.go:106] "docker.io/kubernetesui/metrics-scraper:v1.0.4" needs transfer: "docker.io/kubernetesui/metrics-scraper:v1.0.4" does not exist at hash "86262685d9abb35698a4e03ed13f9ded5b97c6c85b466285e4f367e5232eeee4" in container runtime
	I0816 22:25:55.341784   11635 cri.go:205] Removing image: docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0816 22:25:55.341831   11635 ssh_runner.go:149] Run: which crictl
	I0816 22:25:55.428834   11635 ssh_runner.go:149] Run: sudo /bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:25:55.428936   11635 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/coredns:1.7.0 | grep bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16": (1.149568594s)
	I0816 22:25:55.428974   11635 cache_images.go:106] "k8s.gcr.io/coredns:1.7.0" needs transfer: "k8s.gcr.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0816 22:25:55.429009   11635 cri.go:205] Removing image: k8s.gcr.io/coredns:1.7.0
	I0816 22:25:55.429056   11635 ssh_runner.go:149] Run: which crictl
	I0816 22:25:55.592474   11635 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/pause_3.2 from cache
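
From "LoadImages start" onward, the log repeats one pipeline per image: check whether containerd already holds the image at the expected digest, remove any stale copy, stat the tarball on the guest, scp it from the host cache if missing, then import it into the k8s.io namespace. Condensed into one Go function (the helper signatures are assumptions for illustration, not minikube's API):

package imagesketch

import "fmt"

// ensureImage condenses the check/rmi/stat/scp/import pipeline that
// the log above repeats for every cached image.
func ensureImage(run func(cmd string) error, copyToGuest func(src, dst string) error,
	img, digest, hostTar, guestTar string) error {
	// Already present at the right hash? Then nothing to do.
	check := fmt.Sprintf(`sudo ctr -n=k8s.io images check | grep %s | grep %s`, img, digest)
	if err := run(`/bin/bash -c "` + check + `"`); err == nil {
		return nil
	}
	// Drop any stale copy; a failure here just means there was none.
	_ = run("sudo /bin/crictl rmi " + img)
	// Transfer the cached tarball only if the guest lacks it.
	if err := run(`stat -c "%s %y" ` + guestTar); err != nil {
		if err := copyToGuest(hostTar, guestTar); err != nil {
			return err
		}
	}
	// Import the tarball into containerd's k8s.io namespace.
	return run("sudo ctr -n=k8s.io images import " + guestTar)
}
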
	I0816 22:25:56.011636   11635 ssh_runner.go:149] Run: sudo /bin/crictl rmi docker.io/kubernetesui/metrics-scraper:v1.0.4
	I0816 22:25:56.011761   11635 cache_images.go:106] "k8s.gcr.io/kube-scheduler:v1.20.0" needs transfer: "k8s.gcr.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0816 22:25:56.011800   11635 cri.go:205] Removing image: k8s.gcr.io/kube-scheduler:v1.20.0
	I0816 22:25:56.011831   11635 ssh_runner.go:149] Run: which crictl
	I0816 22:25:56.011930   11635 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/coredns:1.7.0
	I0816 22:25:56.012015   11635 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5
	I0816 22:25:56.012087   11635 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0816 22:25:56.037389   11635 ssh_runner.go:306] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0816 22:25:56.037424   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (10569216 bytes)
	I0816 22:25:56.164064   11635 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-scheduler:v1.20.0
	I0816 22:25:56.164139   11635 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0
	I0816 22:25:56.164233   11635 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.7.0
	I0816 22:25:56.223803   11635 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4
	I0816 22:25:56.223987   11635 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0816 22:25:56.276689   11635 containerd.go:280] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0816 22:25:56.276940   11635 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I0816 22:25:56.518841   11635 image.go:171] found k8s.gcr.io/kube-apiserver:v1.20.0 locally: &{UncompressedImageCore:0xc0012ae030 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:56.519003   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.20.0 | grep ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99"
	I0816 22:25:56.539546   11635 image.go:171] found k8s.gcr.io/kube-controller-manager:v1.20.0 locally: &{UncompressedImageCore:0xc0001b80d0 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:56.539619   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.20.0 | grep b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080"
	I0816 22:25:56.600467   11635 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0
	I0816 22:25:56.600552   11635 ssh_runner.go:306] existence check for /var/lib/minikube/images/coredns_1.7.0: stat -c "%s %y" /var/lib/minikube/images/coredns_1.7.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/coredns_1.7.0': No such file or directory
	I0816 22:25:56.600588   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 --> /var/lib/minikube/images/coredns_1.7.0 (16093184 bytes)
	I0816 22:25:56.600662   11635 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.20.0
	I0816 22:25:56.600681   11635 ssh_runner.go:306] existence check for /var/lib/minikube/images/metrics-scraper_v1.0.4: stat -c "%s %y" /var/lib/minikube/images/metrics-scraper_v1.0.4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/metrics-scraper_v1.0.4': No such file or directory
	I0816 22:25:56.600753   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 --> /var/lib/minikube/images/metrics-scraper_v1.0.4 (17437696 bytes)
	I0816 22:25:57.931744   11635 image.go:171] found k8s.gcr.io/kube-proxy:v1.20.0 locally: &{UncompressedImageCore:0xc0012ae028 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:57.931827   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-proxy:v1.20.0 | grep 10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc"
	I0816 22:25:58.267238   11635 image.go:171] found k8s.gcr.io/etcd:3.4.13-0 locally: &{UncompressedImageCore:0xc0012ae030 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:58.267315   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/etcd:3.4.13-0 | grep 0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934"
	I0816 22:25:58.802005   11635 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-apiserver:v1.20.0 | grep ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99": (2.28292238s)
	I0816 22:25:58.802034   11635 ssh_runner.go:189] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.20.0: (2.201335079s)
	I0816 22:25:58.802028   11635 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep k8s.gcr.io/kube-controller-manager:v1.20.0 | grep b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080": (2.262384808s)
	I0816 22:25:58.802063   11635 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-scheduler_v1.20.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.20.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-scheduler_v1.20.0': No such file or directory
	I0816 22:25:58.802086   11635 cache_images.go:106] "k8s.gcr.io/kube-controller-manager:v1.20.0" needs transfer: "k8s.gcr.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0816 22:25:58.802108   11635 cache_images.go:106] "k8s.gcr.io/etcd:3.4.13-0" needs transfer: "k8s.gcr.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0816 22:25:58.802116   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 --> /var/lib/minikube/images/kube-scheduler_v1.20.0 (16235008 bytes)
	I0816 22:25:58.802125   11635 cri.go:205] Removing image: k8s.gcr.io/kube-controller-manager:v1.20.0
	I0816 22:25:58.802143   11635 cri.go:205] Removing image: k8s.gcr.io/etcd:3.4.13-0
	I0816 22:25:58.802168   11635 ssh_runner.go:149] Run: which crictl
	I0816 22:25:58.802008   11635 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5: (2.524986382s)
	I0816 22:25:58.802185   11635 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0816 22:25:58.802081   11635 cache_images.go:106] "k8s.gcr.io/kube-proxy:v1.20.0" needs transfer: "k8s.gcr.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0816 22:25:58.802207   11635 containerd.go:280] Loading image: /var/lib/minikube/images/coredns_1.7.0
	I0816 22:25:58.802075   11635 cache_images.go:106] "k8s.gcr.io/kube-apiserver:v1.20.0" needs transfer: "k8s.gcr.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0816 22:25:58.802230   11635 cri.go:205] Removing image: k8s.gcr.io/kube-apiserver:v1.20.0
	I0816 22:25:58.802208   11635 cri.go:205] Removing image: k8s.gcr.io/kube-proxy:v1.20.0
	I0816 22:25:58.802249   11635 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_1.7.0
	I0816 22:25:58.802255   11635 ssh_runner.go:149] Run: which crictl
	I0816 22:25:58.802187   11635 ssh_runner.go:149] Run: which crictl
	I0816 22:25:58.802295   11635 ssh_runner.go:149] Run: which crictl
	I0816 22:25:58.829751   11635 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/etcd:3.4.13-0
	I0816 22:25:58.829766   11635 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-proxy:v1.20.0
	I0816 22:25:58.829831   11635 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.20.0
	I0816 22:25:58.829880   11635 ssh_runner.go:149] Run: sudo /bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.20.0
	I0816 22:25:59.002701   11635 image.go:171] found index.docker.io/kubernetesui/dashboard:v2.1.0 locally: &{UncompressedImageCore:0xc0001b81d0 lock:{state:0 sema:0} manifest:<nil>}
	I0816 22:25:59.002760   11635 ssh_runner.go:149] Run: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/dashboard:v2.1.0 | grep 9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db"
	I0816 22:25:59.516257   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.015687   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.516496   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:01.016103   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:01.516235   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:02.015652   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:02.515708   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:03.015981   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:03.516508   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:04.015735   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:00.792096   11635 ssh_runner.go:189] Completed: sudo /bin/crictl rmi k8s.gcr.io/etcd:3.4.13-0: (1.962308151s)
	I0816 22:26:00.792126   11635 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0
	I0816 22:26:00.792155   11635 ssh_runner.go:189] Completed: sudo /bin/crictl rmi k8s.gcr.io/kube-proxy:v1.20.0: (1.962353995s)
	I0816 22:26:00.792172   11635 ssh_runner.go:189] Completed: sudo /bin/crictl rmi k8s.gcr.io/kube-controller-manager:v1.20.0: (1.962317588s)
	I0816 22:26:00.792195   11635 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0
	I0816 22:26:00.792204   11635 ssh_runner.go:189] Completed: sudo /bin/crictl rmi k8s.gcr.io/kube-apiserver:v1.20.0: (1.962305146s)
	I0816 22:26:00.792215   11635 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0
	I0816 22:26:00.792223   11635 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.13-0
	I0816 22:26:00.792269   11635 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.20.0
	I0816 22:26:00.792280   11635 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.20.0
	I0816 22:26:00.792285   11635 ssh_runner.go:189] Completed: /bin/bash -c "sudo ctr -n=k8s.io images check | grep docker.io/kubernetesui/dashboard:v2.1.0 | grep 9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db": (1.789512986s)
	I0816 22:26:00.792179   11635 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0
	I0816 22:26:00.792315   11635 cache_images.go:106] "docker.io/kubernetesui/dashboard:v2.1.0" needs transfer: "docker.io/kubernetesui/dashboard:v2.1.0" does not exist at hash "9a07b5b4bfac07e5cfc27f76c34516a3ad2fdfa3f683f375141fe662ef2e72db" in container runtime
	I0816 22:26:00.792342   11635 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.20.0
	I0816 22:26:00.792348   11635 cri.go:205] Removing image: docker.io/kubernetesui/dashboard:v2.1.0
	I0816 22:26:00.792375   11635 ssh_runner.go:149] Run: which crictl
	I0816 22:26:00.792536   11635 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/coredns_1.7.0: (1.990265666s)
	I0816 22:26:00.792554   11635 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/coredns_1.7.0 from cache
	I0816 22:26:00.792586   11635 containerd.go:280] Loading image: /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0816 22:26:00.792613   11635 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/metrics-scraper_v1.0.4
	I0816 22:26:00.801855   11635 ssh_runner.go:149] Run: sudo /bin/crictl rmi docker.io/kubernetesui/dashboard:v2.1.0
	I0816 22:26:00.806530   11635 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-apiserver_v1.20.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.20.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-apiserver_v1.20.0': No such file or directory
	I0816 22:26:00.806555   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 --> /var/lib/minikube/images/kube-apiserver_v1.20.0 (36263424 bytes)
	I0816 22:26:00.818668   11635 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-proxy_v1.20.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.20.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-proxy_v1.20.0': No such file or directory
	I0816 22:26:00.818702   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 --> /var/lib/minikube/images/kube-proxy_v1.20.0 (54292992 bytes)
	I0816 22:26:00.818715   11635 ssh_runner.go:306] existence check for /var/lib/minikube/images/kube-controller-manager_v1.20.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.20.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/kube-controller-manager_v1.20.0': No such file or directory
	I0816 22:26:00.818740   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 --> /var/lib/minikube/images/kube-controller-manager_v1.20.0 (34932224 bytes)
	I0816 22:26:00.818755   11635 ssh_runner.go:306] existence check for /var/lib/minikube/images/etcd_3.4.13-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.13-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/etcd_3.4.13-0': No such file or directory
	I0816 22:26:00.818785   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 --> /var/lib/minikube/images/etcd_3.4.13-0 (98416128 bytes)
	I0816 22:26:03.153476   11635 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/metrics-scraper_v1.0.4: (2.360838076s)
	I0816 22:26:03.153518   11635 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.4 from cache
	I0816 22:26:03.153527   11635 ssh_runner.go:189] Completed: sudo /bin/crictl rmi docker.io/kubernetesui/dashboard:v2.1.0: (2.351640753s)
	I0816 22:26:03.153547   11635 cache_images.go:276] Loading image from: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0
	I0816 22:26:03.153557   11635 containerd.go:280] Loading image: /var/lib/minikube/images/kube-scheduler_v1.20.0
	I0816 22:26:03.153609   11635 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-scheduler_v1.20.0
	I0816 22:26:03.153617   11635 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0
	I0816 22:26:03.598649   11635 ssh_runner.go:306] existence check for /var/lib/minikube/images/dashboard_v2.1.0: stat -c "%s %y" /var/lib/minikube/images/dashboard_v2.1.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/dashboard_v2.1.0': No such file or directory
	I0816 22:26:03.598692   11635 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.1.0 --> /var/lib/minikube/images/dashboard_v2.1.0 (78078976 bytes)
	I0816 22:26:03.598720   11635 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.20.0 from cache
	I0816 22:26:03.598766   11635 containerd.go:280] Loading image: /var/lib/minikube/images/kube-apiserver_v1.20.0
	I0816 22:26:03.598815   11635 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.20.0
	I0816 22:26:04.686988   11635 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.20.0: (1.088145061s)
	I0816 22:26:04.687017   11635 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.20.0 from cache
	I0816 22:26:04.687041   11635 containerd.go:280] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.20.0
	I0816 22:26:04.687104   11635 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.20.0
	I0816 22:26:04.516518   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:05.015801   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:05.515762   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:06.015635   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:06.515924   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:07.016271   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:07.515686   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:08.015901   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:08.516494   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:09.016345   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:05.647509   11635 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.20.0 from cache
	I0816 22:26:05.647560   11635 containerd.go:280] Loading image: /var/lib/minikube/images/kube-proxy_v1.20.0
	I0816 22:26:05.647610   11635 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.20.0
	I0816 22:26:09.515714   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:10.016162   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:10.516131   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:11.016215   10879 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:26:11.036151   10879 api_server.go:70] duration metric: took 37.544016672s to wait for apiserver process to appear ...
	I0816 22:26:11.036182   10879 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:26:11.036193   10879 api_server.go:239] Checking apiserver healthz at https://192.168.116.91:8443/healthz ...
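
The interleaved 10879 lines are the paused cluster's restart waiting on its own apiserver: pgrep every 500ms until the process exists (37.5s here), then HTTP polls against /healthz with a per-request timeout, retried when a request times out as at 22:26:16 below. The process-wait half as a sketch, again over a hypothetical run helper:

package apiserversketch

import (
	"fmt"
	"time"
)

// waitForAPIServerProcess polls pgrep on a fixed 500ms interval until
// a kube-apiserver process matching the minikube pattern appears or
// the timeout elapses; the healthz poll that follows has the same shape.
func waitForAPIServerProcess(run func(cmd string) error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := run("sudo pgrep -xnf kube-apiserver.*minikube.*"); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}
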
	I0816 22:26:09.951929   11635 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.20.0: (4.304292322s)
	I0816 22:26:09.951958   11635 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.20.0 from cache
	I0816 22:26:09.951981   11635 containerd.go:280] Loading image: /var/lib/minikube/images/etcd_3.4.13-0
	I0816 22:26:09.952023   11635 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.4.13-0
	I0816 22:26:12.919196   11635 ssh_runner.go:189] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.4.13-0: (2.96714421s)
	I0816 22:26:12.919226   11635 cache_images.go:305] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/images/k8s.gcr.io/etcd_3.4.13-0 from cache
	I0816 22:26:12.919245   11635 containerd.go:280] Loading image: /var/lib/minikube/images/dashboard_v2.1.0
	I0816 22:26:12.919292   11635 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/dashboard_v2.1.0
	I0816 22:26:16.037406   10879 api_server.go:255] stopped: https://192.168.116.91:8443/healthz: Get "https://192.168.116.91:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 22:26:16.538167   10879 api_server.go:239] Checking apiserver healthz at https://192.168.116.91:8443/healthz ...
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	f04c445038901       6e38f40d628db       About a minute ago   Exited              storage-provisioner       0                   0e10d9862204b
	e70dd80568a0a       296a6d5035e2d       About a minute ago   Running             coredns                   1                   c649190b7c07d
	2585772c8a261       adb2816ea823a       About a minute ago   Running             kube-proxy                2                   d73b4cafe25f0
	53780b2759956       3d174f00aa39e       About a minute ago   Running             kube-apiserver            2                   fb9f201b2c2e1
	76fef890edebe       6be0dc1302e30       About a minute ago   Running             kube-scheduler            2                   1718d2a0276ce
	69a7fab4848c4       0369cf4303ffd       About a minute ago   Running             etcd                      2                   3b9459ff3a0d8
	825e79d62718c       bc2bb319a7038       About a minute ago   Running             kube-controller-manager   2                   feab707eb735a
	7626b842ef886       3d174f00aa39e       About a minute ago   Created             kube-apiserver            1                   fb9f201b2c2e1
	9d9f34b35e099       adb2816ea823a       About a minute ago   Created             kube-proxy                1                   d73b4cafe25f0
	97c4cc3614116       6be0dc1302e30       About a minute ago   Created             kube-scheduler            1                   1718d2a0276ce
	3644e35e40a2f       0369cf4303ffd       About a minute ago   Created             etcd                      1                   3b9459ff3a0d8
	8c5f2c007cff4       bc2bb319a7038       About a minute ago   Created             kube-controller-manager   1                   feab707eb735a
	28c7161cd49a4       296a6d5035e2d       2 minutes ago        Exited              coredns                   0                   05c2427240818
	a8503bd796d5d       adb2816ea823a       2 minutes ago        Exited              kube-proxy                0                   a86c3b6ee3a70
	124fa393359f7       0369cf4303ffd       3 minutes ago        Exited              etcd                      0                   94a493a65b593
	8710cefecdbe5       6be0dc1302e30       3 minutes ago        Exited              kube-scheduler            0                   982e66890a90d
	38dc61b214a9c       3d174f00aa39e       3 minutes ago        Exited              kube-apiserver            0                   630ed9d4644e9
	
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2021-08-16 22:22:35 UTC, end at Mon 2021-08-16 22:26:19 UTC. --
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.233198734Z" level=info msg="StartContainer for \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.443953465Z" level=info msg="StopPodSandbox for \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.444769081Z" level=info msg="Container to stop \"28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.461877463Z" level=info msg="StartContainer for \"2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2\" returns successfully"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536079353Z" level=info msg="TearDown network for sandbox \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\" successfully"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536191167Z" level=info msg="StopPodSandbox for \"05c24272408181b9c89f41ac96a6fc411fd43bae5540d12b31e720843bc7e126\" returns successfully"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.536962082Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-gkxhz,Uid:5aa76749-775e-423d-bbf9-680a20a27051,Namespace:kube-system,Attempt:1,}"
	Aug 16 22:25:00 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:00.776744568Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb pid=5007
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.290447333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-558bd4d5db-gkxhz,Uid:5aa76749-775e-423d-bbf9-680a20a27051,Namespace:kube-system,Attempt:1,} returns sandbox id \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\""
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.300600113Z" level=info msg="CreateContainer within sandbox \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.389478760Z" level=info msg="CreateContainer within sandbox \"c649190b7c07d0ba92b576298de36400d8063705ffd20276220e5c8242266ffb\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\""
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.397162604Z" level=info msg="StartContainer for \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\""
	Aug 16 22:25:01 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:01.594046909Z" level=info msg="StartContainer for \"e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a\" returns successfully"
	Aug 16 22:25:11 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:11.852957632Z" level=info msg="RunPodsandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:4f138dc7-da0e-4775-b4de-b0f7d616b212,Namespace:kube-system,Attempt:0,}"
	Aug 16 22:25:11 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:11.903771908Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c pid=5174
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.439549893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:4f138dc7-da0e-4775-b4de-b0f7d616b212,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\""
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.451930506Z" level=info msg="CreateContainer within sandbox \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.521875733Z" level=info msg="CreateContainer within sandbox \"0e10d9862204bac2c3d144d60c8458628ae4bf9e9fab64e40f4b937b6646804c\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.523292924Z" level=info msg="StartContainer for \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
	Aug 16 22:25:12 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:12.851898064Z" level=info msg="StartContainer for \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\" returns successfully"
	Aug 16 22:25:32 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:32.698376142Z" level=info msg="Finish piping stderr of container \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
	Aug 16 22:25:32 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:32.700549219Z" level=info msg="Finish piping stdout of container \"f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a\""
	Aug 16 22:25:32 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:32.702928257Z" level=info msg="TaskExit event &TaskExit{ContainerID:f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a,ID:f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a,Pid:5251,ExitStatus:255,ExitedAt:2021-08-16 22:25:32.702245647 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:25:32 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:32.834950240Z" level=info msg="shim disconnected" id=f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a
	Aug 16 22:25:32 pause-20210816222224-6986 containerd[3803]: time="2021-08-16T22:25:32.835568670Z" level=error msg="copy shim log" error="read /proc/self/fd/118: file already closed"
	
	* 
	* ==> coredns [28c7161cd49a472686f2bb046fb5ac4c661d9fcd9e5e84116ea611194f5a22a0] <==
	* I0816 22:24:19.170128       1 trace.go:205] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.168) (total time: 30001ms):
	Trace[2019727887]: [30.001909435s] [30.001909435s] END
	E0816 22:24:19.170279       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Endpoints: failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0816 22:24:19.171047       1 trace.go:205] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.166) (total time: 30004ms):
	Trace[939984059]: [30.004733433s] [30.004733433s] END
	E0816 22:24:19.171149       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0816 22:24:19.171258       1 trace.go:205] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156 (16-Aug-2021 22:23:49.166) (total time: 30004ms):
	Trace[911902081]: [30.004945736s] [30.004945736s] END
	E0816 22:24:19.171265       1 reflector.go:127] pkg/mod/k8s.io/client-go@v0.19.2/tools/cache/reflector.go:156: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> coredns [e70dd80568a0a134cd147b42c9c85b176b8e57570012074e1f92a3b1a94bab9a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = 7ae91e86dd75dee9ae501cb58003198b
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* Name:               pause-20210816222224-6986
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-20210816222224-6986
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48
	                    minikube.k8s.io/name=pause-20210816222224-6986
	                    minikube.k8s.io/updated_at=2021_08_16T22_23_26_0700
	                    minikube.k8s.io/version=v1.22.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Aug 2021 22:23:23 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-20210816222224-6986
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Aug 2021 22:25:19 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 16 Aug 2021 22:24:59 +0000   Mon, 16 Aug 2021 22:26:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 16 Aug 2021 22:24:59 +0000   Mon, 16 Aug 2021 22:26:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 16 Aug 2021 22:24:59 +0000   Mon, 16 Aug 2021 22:26:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 16 Aug 2021 22:24:59 +0000   Mon, 16 Aug 2021 22:26:02 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.50.226
	  Hostname:    pause-20210816222224-6986
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2033044Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2033044Ki
	  pods:               110
	System Info:
	  Machine ID:                 940ad300f94c41e2a0b0cde81be11541
	  System UUID:                940ad300-f94c-41e2-a0b0-cde81be11541
	  Boot ID:                    ea001a4b-e783-4f93-b7d3-bb910eb45d3c
	  Kernel Version:             4.19.182
	  OS Image:                   Buildroot 2020.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.4.9
	  Kubelet Version:            v1.21.3
	  Kube-Proxy Version:         v1.21.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-558bd4d5db-gkxhz                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m34s
	  kube-system                 etcd-pause-20210816222224-6986                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m56s
	  kube-system                 kube-apiserver-pause-20210816222224-6986             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m54s
	  kube-system                 kube-controller-manager-pause-20210816222224-6986    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 kube-proxy-7l59t                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-scheduler-pause-20210816222224-6986             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m48s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  3m8s (x6 over 3m9s)  kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m8s (x5 over 3m9s)  kubelet     Node pause-20210816222224-6986 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m8s (x5 over 3m9s)  kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m48s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m48s                kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m48s                kubelet     Node pause-20210816222224-6986 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m48s                kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m48s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m38s                kubelet     Node pause-20210816222224-6986 status is now: NodeReady
	  Normal  Starting                 2m31s                kube-proxy  Starting kube-proxy.
	  Normal  Starting                 89s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  89s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  88s (x8 over 89s)    kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s (x8 over 89s)    kubelet     Node pause-20210816222224-6986 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s (x7 over 89s)    kubelet     Node pause-20210816222224-6986 status is now: NodeHasSufficientPID
	  Normal  Starting                 80s                  kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.006251] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.889854] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[ +16.286436] systemd-fstab-generator[2098]: Ignoring "noauto" for root device
	[  +0.258185] systemd-fstab-generator[2128]: Ignoring "noauto" for root device
	[  +0.135377] systemd-fstab-generator[2143]: Ignoring "noauto" for root device
	[  +0.180446] systemd-fstab-generator[2173]: Ignoring "noauto" for root device
	[Aug16 22:23] systemd-fstab-generator[2381]: Ignoring "noauto" for root device
	[ +20.504547] systemd-fstab-generator[2808]: Ignoring "noauto" for root device
	[ +20.717915] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.551219] kauditd_printk_skb: 104 callbacks suppressed
	[Aug16 22:24] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.792051] systemd-fstab-generator[3754]: Ignoring "noauto" for root device
	[  +0.176916] systemd-fstab-generator[3767]: Ignoring "noauto" for root device
	[  +0.230657] systemd-fstab-generator[3792]: Ignoring "noauto" for root device
	[  +4.083098] kauditd_printk_skb: 2 callbacks suppressed
	[  +3.840195] NFSD: Unable to end grace period: -110
	[  +4.324119] systemd-fstab-generator[4543]: Ignoring "noauto" for root device
	[  +6.680726] kauditd_printk_skb: 29 callbacks suppressed
	[Aug16 22:25] kauditd_printk_skb: 14 callbacks suppressed
	[ +12.641213] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.313576] systemd-fstab-generator[5666]: Ignoring "noauto" for root device
	[  +0.847782] systemd-fstab-generator[5723]: Ignoring "noauto" for root device
	[  +1.051927] systemd-fstab-generator[5775]: Ignoring "noauto" for root device
	[Aug16 22:26] systemd-fstab-generator[6446]: Ignoring "noauto" for root device
	[  +0.762421] systemd-fstab-generator[6474]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [124fa393359f758ea47161b345d2cab4ce486d4473a4caad483449464d44315f] <==
	* 2021-08-16 22:23:41.064197 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-pause-20210816222224-6986\" " with result "range_response_count:1 size:5052" took too long (6.421187445s) to execute
	2021-08-16 22:23:41.065847 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:1 size:341" took too long (6.446897155s) to execute
	2021-08-16 22:23:41.066285 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/node-controller\" " with result "range_response_count:1 size:242" took too long (5.09674902s) to execute
	2021-08-16 22:23:41.068005 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (6.28196539s) to execute
	2021-08-16 22:23:41.068259 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (763.710719ms) to execute
	2021-08-16 22:23:41.880435 W | etcdserver: read-only range request "key:\"/registry/masterleases/192.168.50.226\" " with result "range_response_count:0 size:5" took too long (776.335267ms) to execute
	2021-08-16 22:23:41.881080 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (597.366064ms) to execute
	2021-08-16 22:23:41.882354 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:4569" took too long (763.841142ms) to execute
	2021-08-16 22:23:41.883287 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (621.677263ms) to execute
	2021-08-16 22:23:41.884722 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (481.499599ms) to execute
	2021-08-16 22:23:41.885189 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-pause-20210816222224-6986\" " with result "range_response_count:1 size:5421" took too long (772.180278ms) to execute
	2021-08-16 22:23:42.453217 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (290.061418ms) to execute
	2021-08-16 22:23:42.455427 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:212" took too long (285.893643ms) to execute
	2021-08-16 22:23:42.456943 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-pause-20210816222224-6986\" " with result "range_response_count:1 size:6314" took too long (153.946258ms) to execute
	2021-08-16 22:23:42.458024 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (177.825431ms) to execute
	2021-08-16 22:23:44.267832 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:23:54.092150 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (701.802797ms) to execute
	2021-08-16 22:23:54.093518 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-ctgf5\" " with result "range_response_count:1 size:4473" took too long (1.090386256s) to execute
	2021-08-16 22:23:54.267392 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:23:57.768234 W | etcdserver: request "header:<ID:4263355585347158035 > lease_revoke:<id:3b2a7b510fcb7e67>" with result "size:29" took too long (771.90226ms) to execute
	2021-08-16 22:23:57.768903 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (374.444829ms) to execute
	2021-08-16 22:23:57.769379 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-ctgf5\" " with result "range_response_count:1 size:4473" took too long (765.115046ms) to execute
	2021-08-16 22:24:04.267548 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:24:14.267958 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:24:24.268321 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> etcd [3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423] <==
	* 
	* ==> etcd [69a7fab4848c4475884a0a3e91f7d9f020c7159e916b98d8952d24a322486549] <==
	* 2021-08-16 22:24:53.773065 W | auth: simple token is not cryptographically signed
	2021-08-16 22:24:53.837118 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	raft2021/08/16 22:24:53 INFO: e840193bf29c3b2a switched to configuration voters=(16735403960572853034)
	2021-08-16 22:24:53.849298 I | etcdserver/membership: added member e840193bf29c3b2a [https://192.168.50.226:2380] to cluster 99b90e1bea73c730
	2021-08-16 22:24:53.860198 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-16 22:24:53.864997 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-16 22:24:53.865214 I | embed: listening for peers on 192.168.50.226:2380
	2021-08-16 22:24:53.868083 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-16 22:24:53.871735 I | etcdserver/api: enabled capabilities for version 3.4
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a is starting a new election at term 2
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a became candidate at term 3
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a received MsgVoteResp from e840193bf29c3b2a at term 3
	raft2021/08/16 22:24:54 INFO: e840193bf29c3b2a became leader at term 3
	raft2021/08/16 22:24:54 INFO: raft.node: e840193bf29c3b2a elected leader e840193bf29c3b2a at term 3
	2021-08-16 22:24:54.968820 I | embed: ready to serve client requests
	2021-08-16 22:24:54.969394 I | etcdserver: published {Name:pause-20210816222224-6986 ClientURLs:[https://192.168.50.226:2379]} to cluster 99b90e1bea73c730
	2021-08-16 22:24:54.971284 I | embed: serving client requests on 192.168.50.226:2379
	2021-08-16 22:24:54.971462 I | embed: ready to serve client requests
	2021-08-16 22:24:54.973508 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-16 22:25:03.067902 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-558bd4d5db-gkxhz\" " with result "range_response_count:1 size:4860" took too long (140.807991ms) to execute
	2021-08-16 22:25:06.747736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:25:08.138740 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:25:10.645123 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/coredns\" " with result "range_response_count:1 size:3838" took too long (108.124514ms) to execute
	2021-08-16 22:25:10.645989 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:665" took too long (107.967343ms) to execute
	2021-08-16 22:25:18.137756 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  22:26:20 up 3 min,  0 users,  load average: 1.55, 1.37, 0.60
	Linux pause-20210816222224-6986 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [38dc61b214a9cbd019de4ca9ab52fb6baf728336de6d715df22b027522ad8b20] <==
	* I0816 22:23:41.890272       1 trace.go:205] Trace[914939944]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:23:41.117) (total time: 772ms):
	Trace[914939944]: [772.29448ms] [772.29448ms] END
	I0816 22:23:41.897880       1 trace.go:205] Trace[372773048]: "List" url:/api/v1/nodes,user-agent:kube-controller-manager/v1.21.3 (linux/amd64) kubernetes/ca643a4/system:serviceaccount:kube-system:node-controller,client:192.168.50.226,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:41.117) (total time: 780ms):
	Trace[372773048]: ---"Listing from storage done" 773ms (22:23:00.891)
	Trace[372773048]: [780.024685ms] [780.024685ms] END
	I0816 22:23:41.899245       1 trace.go:205] Trace[189474875]: "Get" url:/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-20210816222224-6986,user-agent:kubelet/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:192.168.50.226,accept:application/vnd.kubernetes.protobuf,application/json,protocol:HTTP/2.0 (16-Aug-2021 22:23:41.107) (total time: 791ms):
	Trace[189474875]: ---"About to write a response" 791ms (22:23:00.899)
	Trace[189474875]: [791.769473ms] [791.769473ms] END
	I0816 22:23:41.914143       1 trace.go:205] Trace[1803257945]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (16-Aug-2021 22:23:41.101) (total time: 812ms):
	Trace[1803257945]: ---"initial value restored" 795ms (22:23:00.897)
	Trace[1803257945]: [812.099383ms] [812.099383ms] END
	I0816 22:23:46.219827       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0816 22:23:46.322056       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0816 22:23:54.101003       1 trace.go:205] Trace[1429856954]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-ctgf5,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.50.1,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:53.002) (total time: 1098ms):
	Trace[1429856954]: ---"About to write a response" 1098ms (22:23:00.100)
	Trace[1429856954]: [1.0988209s] [1.0988209s] END
	I0816 22:23:56.194218       1 client.go:360] parsed scheme: "passthrough"
	I0816 22:23:56.194943       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 22:23:56.195388       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0816 22:23:57.770900       1 trace.go:205] Trace[2103117378]: "Get" url:/api/v1/namespaces/kube-system/pods/coredns-558bd4d5db-ctgf5,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.50.1,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:23:57.002) (total time: 767ms):
	Trace[2103117378]: ---"About to write a response" 767ms (22:23:00.770)
	Trace[2103117378]: [767.944134ms] [767.944134ms] END
	I0816 22:24:32.818404       1 client.go:360] parsed scheme: "passthrough"
	I0816 22:24:32.818597       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 22:24:32.818691       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	* 
	* ==> kube-apiserver [53780b27599568e32d56b0f3cc49cf3ee7f729f86a18ab7c1f7a144e2e6ea8cf] <==
	* I0816 22:25:01.108795       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0816 22:25:01.182177       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0816 22:25:01.279321       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0816 22:25:01.344553       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 22:25:01.382891       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 22:25:11.471022       1 controller.go:611] quota admission added evaluator for: endpoints
	I0816 22:25:13.002505       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 22:26:09.708590       1 client.go:360] parsed scheme: "passthrough"
	I0816 22:26:09.708813       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 22:26:09.708836       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0816 22:26:10.250820       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, context canceled]"
	E0816 22:26:10.251413       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:26:10.251945       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:26:10.252962       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0816 22:26:10.253318       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, context canceled]"
	E0816 22:26:10.253689       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:26:10.255306       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:26:10.287601       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0816 22:26:10.923951       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:26:10.924099       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:26:10.926309       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0816 22:26:10.928101       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0816 22:26:10.928591       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:26:10.929574       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:26:10.930759       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
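
These errors come from writes that land after the apiserver's request-timeout filter has already fired while the node is being paused; the error string is net/http's ErrHandlerTimeout. A stdlib sketch that surfaces the same pair of symptoms, a 503 to the client and "http: Handler timeout" on the late write; the handler and durations here are illustrative:

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"time"
)

func main() {
	// A handler that misses its deadline: the timeout wrapper answers the
	// client first, and the handler's late write fails with
	// http.ErrHandlerTimeout ("http: Handler timeout").
	slow := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(50 * time.Millisecond)
		if _, err := w.Write([]byte("late")); err != nil {
			fmt.Println("write failed:", err)
		}
	})
	h := http.TimeoutHandler(slow, 10*time.Millisecond, "")
	rr := httptest.NewRecorder()
	h.ServeHTTP(rr, httptest.NewRequest(http.MethodGet, "/healthz", nil))
	fmt.Println("client saw status:", rr.Code) // 503
	time.Sleep(100 * time.Millisecond)         // let the slow handler finish
}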
	
	* 
	* ==> kube-apiserver [7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612] <==
	* 
	* ==> kube-controller-manager [825e79d62718c82fae36a8f7ce435923b7a01e2351bd82cb886fa5b21deebee7] <==
	* W0816 22:26:05.643314       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolumeClaim ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643340       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643359       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PriorityClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643375       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.Ingress ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643397       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643403       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PodDisruptionBudget ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643441       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.RoleBinding ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643454       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.PodSecurityPolicy ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643483       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643499       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.NetworkPolicy ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643529       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ValidatingWebhookConfiguration ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643538       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.CSIStorageCapacity ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643571       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Endpoints ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643579       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ReplicationController ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643697       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ClusterRole ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:26:05.643952       1 reflector.go:436] k8s.io/client-go/metadata/metadatainformer/informer.go:90: watch of *v1.PartialObjectMetadata ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	I0816 22:26:11.004517       1 event.go:291] "Event occurred" object="pause-20210816222224-6986" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node pause-20210816222224-6986 status is now: NodeNotReady"
	I0816 22:26:11.080243       1 event.go:291] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:26:11.118447       1 event.go:291] "Event occurred" object="kube-system/coredns-558bd4d5db-gkxhz" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:26:11.168905       1 event.go:291] "Event occurred" object="kube-system/etcd-pause-20210816222224-6986" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:26:11.220112       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-pause-20210816222224-6986" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:26:11.265527       1 event.go:291] "Event occurred" object="kube-system/kube-controller-manager-pause-20210816222224-6986" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:26:11.294104       1 event.go:291] "Event occurred" object="kube-system/kube-proxy-7l59t" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0816 22:26:11.318132       1 node_lifecycle_controller.go:1164] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0816 22:26:11.318953       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-pause-20210816222224-6986" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	* 
	* ==> kube-controller-manager [8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4] <==
	* 
	* ==> kube-proxy [2585772c8a2613d7a74e14d800b857a56a792ecc34055875f6eeb2a93c0b66c2] <==
	* I0816 22:25:00.641886       1 node.go:172] Successfully retrieved node IP: 192.168.50.226
	I0816 22:25:00.641938       1 server_others.go:140] Detected node IP 192.168.50.226
	W0816 22:25:00.642012       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0816 22:25:00.805515       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0816 22:25:00.805539       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0816 22:25:00.805560       1 server_others.go:212] Using iptables Proxier.
	I0816 22:25:00.806059       1 server.go:643] Version: v1.21.3
	I0816 22:25:00.807251       1 config.go:315] Starting service config controller
	I0816 22:25:00.807281       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:25:00.807307       1 config.go:224] Starting endpoint slice config controller
	I0816 22:25:00.807313       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:25:00.812511       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 22:25:00.816722       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:25:00.907844       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 22:25:00.907906       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-proxy [9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d] <==
	* 
	* ==> kube-proxy [a8503bd796d5d979a6e1b8b5154986e8b77de391b4f091211451ea5f52808e52] <==
	* I0816 22:23:49.316430       1 node.go:172] Successfully retrieved node IP: 192.168.50.226
	I0816 22:23:49.316608       1 server_others.go:140] Detected node IP 192.168.50.226
	W0816 22:23:49.316822       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0816 22:23:49.402698       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0816 22:23:49.403462       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0816 22:23:49.404047       1 server_others.go:212] Using iptables Proxier.
	I0816 22:23:49.407950       1 server.go:643] Version: v1.21.3
	I0816 22:23:49.410864       1 config.go:315] Starting service config controller
	I0816 22:23:49.413112       1 config.go:224] Starting endpoint slice config controller
	I0816 22:23:49.419474       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:23:49.421254       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:23:49.413718       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:23:49.425958       1 shared_informer.go:247] Caches are synced for service config 
	W0816 22:23:49.425586       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:23:49.520425       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [76fef890edebee46dbc2d1cf2001c2a580431370d25097acd32a6548309ac6e1] <==
	* E0816 22:26:09.162177       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.50.226:8443/api/v1/persistentvolumes?resourceVersion=484": net/http: TLS handshake timeout
	I0816 22:26:09.270505       1 trace.go:205] Trace[979895767]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:25:59.268) (total time: 10002ms):
	Trace[979895767]: [10.002171145s] [10.002171145s] END
	E0816 22:26:09.270735       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.50.226:8443/apis/storage.k8s.io/v1/storageclasses?resourceVersion=567": net/http: TLS handshake timeout
	I0816 22:26:09.285882       1 trace.go:205] Trace[1750384971]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:25:59.283) (total time: 10002ms):
	Trace[1750384971]: [10.002067459s] [10.002067459s] END
	E0816 22:26:09.285914       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.50.226:8443/api/v1/nodes?resourceVersion=491": net/http: TLS handshake timeout
	I0816 22:26:09.299098       1 trace.go:205] Trace[1557489506]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:25:59.297) (total time: 10001ms):
	Trace[1557489506]: [10.001361328s] [10.001361328s] END
	E0816 22:26:09.299227       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.50.226:8443/api/v1/replicationcontrollers?resourceVersion=484": net/http: TLS handshake timeout
	I0816 22:26:09.446582       1 trace.go:205] Trace[1473987170]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:25:59.445) (total time: 10000ms):
	Trace[1473987170]: [10.000963367s] [10.000963367s] END
	E0816 22:26:09.446603       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.50.226:8443/apis/storage.k8s.io/v1/csinodes?resourceVersion=484": net/http: TLS handshake timeout
	I0816 22:26:09.496925       1 trace.go:205] Trace[1868762133]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:25:59.495) (total time: 10001ms):
	Trace[1868762133]: [10.001820772s] [10.001820772s] END
	E0816 22:26:09.496954       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.50.226:8443/api/v1/persistentvolumeclaims?resourceVersion=484": net/http: TLS handshake timeout
	I0816 22:26:09.612135       1 trace.go:205] Trace[1357747237]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:25:59.610) (total time: 10002ms):
	Trace[1357747237]: [10.002067456s] [10.002067456s] END
	E0816 22:26:09.612165       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.50.226:8443/api/v1/services?resourceVersion=484": net/http: TLS handshake timeout
	I0816 22:26:09.654977       1 trace.go:205] Trace[1687302369]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:25:59.653) (total time: 10001ms):
	Trace[1687302369]: [10.00115526s] [10.00115526s] END
	E0816 22:26:09.655006       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.50.226:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&resourceVersion=581": net/http: TLS handshake timeout
	I0816 22:26:09.756127       1 trace.go:205] Trace[2132808872]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:25:59.754) (total time: 10001ms):
	Trace[2132808872]: [10.001340189s] [10.001340189s] END
	E0816 22:26:09.756154       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.50.226:8443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=484": net/http: TLS handshake timeout
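
Every trace in this block runs almost exactly 10 seconds, which matches net/http's default TLSHandshakeTimeout (inherited by client-go): with the apiserver's containers frozen by the pause, the kernel still completes the TCP handshake for the listening socket, but no userspace remains to answer the TLS ClientHello. A sketch of the same request path, using the endpoint from the log; run against a paused apiserver this would print the same error:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			TLSHandshakeTimeout: 10 * time.Second, // net/http's default, matching the ~10.00s traces
			TLSClientConfig:     &tls.Config{InsecureSkipVerify: true}, // sketch only; skips cert verification
		},
	}
	resp, err := client.Get("https://192.168.50.226:8443/api/v1/nodes")
	if err != nil {
		fmt.Println(err) // e.g. "net/http: TLS handshake timeout"
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.StatusCode)
}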
	
	* 
	* ==> kube-scheduler [8710cefecdbe5d31cd44e9ae3378bc08cbc56001326a1cb38026755196cac7d1] <==
	* E0816 22:23:21.172468       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:23:21.189536       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:23:21.300836       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.329219       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.448607       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:23:21.504104       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.504531       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:23:21.597849       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:23:21.612843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:23:21.671333       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:23:21.827198       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:23:21.852843       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:23:21.867015       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:21.910139       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:23:23.291774       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.356078       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.452841       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:23:23.464942       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:23:23.644764       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:23:23.649142       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.710606       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:23:23.980099       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:23:24.052112       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:23:24.168543       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0816 22:23:30.043826       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kube-scheduler [97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a] <==
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:22:35 UTC, end at Mon 2021-08-16 22:26:20 UTC. --
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.626851    6454 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.628485    6454 container_manager_linux.go:278] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.628976    6454 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.629246    6454 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.629435    6454 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.629715    6454 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.630062    6454 remote_runtime.go:62] parsed scheme: ""
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.630231    6454 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.630443    6454 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.630603    6454 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.632294    6454 remote_image.go:50] parsed scheme: ""
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.632534    6454 remote_image.go:50] scheme "" not registered, fallback to default scheme
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.632564    6454 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.632583    6454 clientconn.go:948] ClientConn switching balancer to "pick_first"
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.632816    6454 kubelet.go:404] "Attempting to sync node with API server"
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.632844    6454 kubelet.go:272] "Adding static pod path" path="/etc/kubernetes/manifests"
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.632889    6454 kubelet.go:283] "Adding apiserver pod source"
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.632922    6454 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.637422    6454 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="containerd" version="v1.4.9" apiVersion="v1alpha2"
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.679320    6454 apiserver.go:52] "Watching apiserver"
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: E0816 22:26:15.915559    6454 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]:         For verbose messaging see aws.Config.CredentialsChainVerboseErrors
	Aug 16 22:26:15 pause-20210816222224-6986 kubelet[6454]: I0816 22:26:15.916399    6454 server.go:1190] "Started kubelet"
	Aug 16 22:26:15 pause-20210816222224-6986 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:26:15 pause-20210816222224-6986 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> storage-provisioner [f04c4450389018cfad6006421ccab65709ddb813ec0cf24ed2ca27673444361a] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 80 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc00032b490, 0x0)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc00032b480)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0003e1260, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0003bd400, 0x18e5530, 0xc00032a600, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0001b7d20)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0001b7d20, 0x18b3d60, 0xc0001bb8c0, 0xc00038ff01, 0xc000136600)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0001b7d20, 0x3b9aca00, 0x0, 0x48ef01, 0xc000136600)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0001b7d20, 0x3b9aca00, 0xc000136600)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

                                                
                                                
-- /stdout --
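
The storage-provisioner block above is a goroutine dump rather than a crash log: goroutine 80 is simply parked in workqueue.(*Type).Get, waiting on a sync.Cond for the next volume work item, which is why the top of the stack is sync.runtime_notifyListWait. A minimal sketch of that worker loop, using the same client-go and apimachinery packages the stack names (the queue item is illustrative):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/workqueue"
)

func main() {
	stop := make(chan struct{})
	q := workqueue.New()
	// The provisioner's worker is blocked exactly here: Get parks on a
	// sync.Cond until an item is queued or the queue is shut down.
	go wait.Until(func() {
		item, shutdown := q.Get()
		if shutdown {
			return
		}
		fmt.Println("processing", item)
		q.Done(item)
	}, time.Second, stop)
	q.Add("pvc-demo") // hypothetical work item
	time.Sleep(100 * time.Millisecond)
	close(stop)
	q.ShutDown()
}
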
** stderr ** 
	E0816 22:26:20.148139   12138 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:26:20Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log\": lstat /var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:26:20Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log\\\": lstat /var/log/pods/kube-system_etcd-pause-20210816222224-6986_39b50dc67d48590b868ad1d518085815/etcd/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:26:20.336323   12138 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:26:20Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log\": lstat /var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:26:20Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log\\\": lstat /var/log/pods/kube-system_kube-apiserver-pause-20210816222224-6986_d054e2e5c9f71517b6c4713abc6b99a6/kube-apiserver/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:26:20.438764   12138 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:26:20Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log\": lstat /var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:26:20Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log\\\": lstat /var/log/pods/kube-system_kube-controller-manager-pause-20210816222224-6986_5ab6c2e6848a3710cdfd5b4cd1b2f01c/kube-controller-manager/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:26:20.520793   12138 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:26:20Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log\": lstat /var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:26:20Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log\\\": lstat /var/log/pods/kube-system_kube-proxy-7l59t_3c0e0899-31c1-477a-a6d4-2844091deea2/kube-proxy/1.log: no such file or directory\"\n\n** /stderr **"
	E0816 22:26:20.700577   12138 logs.go:190] command /bin/bash -c "sudo /bin/crictl logs --tail 25 97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a" failed with error: /bin/bash -c "sudo /bin/crictl logs --tail 25 97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a": Process exited with status 1
	stdout:
	
	stderr:
	time="2021-08-16T22:26:20Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log\": lstat /var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log: no such file or directory"
	 output: "\n** stderr ** \ntime=\"2021-08-16T22:26:20Z\" level=fatal msg=\"failed to try resolving symlinks in path \\\"/var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log\\\": lstat /var/log/pods/kube-system_kube-scheduler-pause-20210816222224-6986_3320df5e4c4e10145cfcc766b9e74fc4/kube-scheduler/1.log: no such file or directory\"\n\n** /stderr **"
	! unable to fetch logs for: etcd [3644e35e40a2f17fa3bcea105ee7bcbc9a5fc2249355f81012f2d858354bd423], kube-apiserver [7626b842ef886cb703fca4dd8825fe446fca1f126235dbf0837a389ae226b612], kube-controller-manager [8c5f2c007cff4bc8eaf2cb09e8c50d28be18550815227880a13b8c7c5ba3e5c4], kube-proxy [9d9f34b35e0991e704169b75d1e4ccd0b07217688f4208d90a92161254b1471d], kube-scheduler [97c4cc36141166a7b8f3a01663f4b774253f15560a91c9c8c502ba5911ed8a2a]

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestPause/serial/PauseAgain (10.66s)
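
Note on the crictl failures above: containerd resolves the pod log symlink under /var/log/pods before reading it, and the "failed to try resolving symlinks ... lstat ...: no such file or directory" fatals mean the on-disk log files were gone (e.g. removed across the restart). A minimal Go sketch of the same resolution step — illustrative only, not minikube or containerd code:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// resolvePodLog mirrors the symlink resolution that fails in the report:
// if the on-disk log file is missing, EvalSymlinks returns the same
// "no such file or directory" lstat error that crictl surfaces.
func resolvePodLog(path string) (string, error) {
	resolved, err := filepath.EvalSymlinks(path)
	if err != nil {
		return "", fmt.Errorf("failed to try resolving symlinks in path %q: %w", path, err)
	}
	return resolved, nil
}

func main() {
	// Hypothetical path in the same layout as the failing containers above.
	p := "/var/log/pods/kube-system_kube-apiserver-example_0000/kube-apiserver/1.log"
	if resolved, err := resolvePodLog(p); err != nil {
		fmt.Fprintln(os.Stderr, err) // same error shape as the crictl fatals
	} else {
		fmt.Println("log file at:", resolved)
	}
}

A caller that probes the path first can skip tailing logs for containers whose files were cleaned up, instead of failing the whole log fetch.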

TestStartStop/group/no-preload/serial/Pause (107.4s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20210816223156-6986 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-20210816223156-6986 --alsologtostderr -v=1: exit status 80 (2.397982605s)

-- stdout --
	* Pausing node no-preload-20210816223156-6986 ... 
	
	

-- /stdout --
** stderr ** 
	I0816 22:42:42.592354   19613 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:42:42.593156   19613 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:42:42.593171   19613 out.go:311] Setting ErrFile to fd 2...
	I0816 22:42:42.593176   19613 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:42:42.593319   19613 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:42:42.593533   19613 out.go:305] Setting JSON to false
	I0816 22:42:42.593557   19613 mustload.go:65] Loading cluster: no-preload-20210816223156-6986
	I0816 22:42:42.594796   19613 config.go:177] Loaded profile config "no-preload-20210816223156-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0816 22:42:42.595583   19613 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:42.595663   19613 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:42.606863   19613 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43909
	I0816 22:42:42.607310   19613 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:42.607887   19613 main.go:130] libmachine: Using API Version  1
	I0816 22:42:42.607910   19613 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:42.608283   19613 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:42.608450   19613 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:42.611561   19613 host.go:66] Checking if "no-preload-20210816223156-6986" exists ...
	I0816 22:42:42.611965   19613 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:42.611999   19613 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:42.622660   19613 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46145
	I0816 22:42:42.623064   19613 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:42.623492   19613 main.go:130] libmachine: Using API Version  1
	I0816 22:42:42.623511   19613 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:42.623849   19613 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:42.624005   19613 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .DriverName
	I0816 22:42:42.624623   19613 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-20210816223156-6986 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0816 22:42:42.627371   19613 out.go:177] * Pausing node no-preload-20210816223156-6986 ... 
	I0816 22:42:42.627389   19613 host.go:66] Checking if "no-preload-20210816223156-6986" exists ...
	I0816 22:42:42.627686   19613 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:42.627723   19613 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:42.637956   19613 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46367
	I0816 22:42:42.638298   19613 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:42.638701   19613 main.go:130] libmachine: Using API Version  1
	I0816 22:42:42.638728   19613 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:42.639100   19613 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:42.639242   19613 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .DriverName
	I0816 22:42:42.639402   19613 ssh_runner.go:149] Run: systemctl --version
	I0816 22:42:42.639419   19613 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHHostname
	I0816 22:42:42.644546   19613 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:42.644863   19613 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:66:08", ip: ""} in network mk-no-preload-20210816223156-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:37:07 +0000 UTC Type:0 Mac:52:54:00:1e:66:08 Iaid: IPaddr:192.168.116.66 Prefix:24 Hostname:no-preload-20210816223156-6986 Clientid:01:52:54:00:1e:66:08}
	I0816 22:42:42.644900   19613 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined IP address 192.168.116.66 and MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:42.644974   19613 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHPort
	I0816 22:42:42.645137   19613 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHKeyPath
	I0816 22:42:42.645291   19613 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHUsername
	I0816 22:42:42.645421   19613 sshutil.go:53] new ssh client: &{IP:192.168.116.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa Username:docker}
	I0816 22:42:42.744881   19613 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:42:42.757890   19613 pause.go:50] kubelet running: true
	I0816 22:42:42.757953   19613 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:42:43.004423   19613 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:42:43.004544   19613 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:42:43.150373   19613 cri.go:76] found id: "89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2"
	I0816 22:42:43.150400   19613 cri.go:76] found id: "e0bb5a872be5dee5a28ce3431d7728ab3b6d152dacd42c354ec65b03076e30d0"
	I0816 22:42:43.150405   19613 cri.go:76] found id: "4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5"
	I0816 22:42:43.150409   19613 cri.go:76] found id: "1e3071c5b1c506432365e271b588aadbfd2eda919a23dcea85d60022046acfb4"
	I0816 22:42:43.150413   19613 cri.go:76] found id: "cb12185bdacc363d50b453691deee789994d7ad81788b6df833ae986a27ce50a"
	I0816 22:42:43.150420   19613 cri.go:76] found id: "a5a4025cb261585737124bf5b619554e77896d6c93458ee1baf7d4a451c195db"
	I0816 22:42:43.150424   19613 cri.go:76] found id: "f7e4e0952db6f386f632efe7b9c51f084992c1ef01a488ff933fd58c8c0522f0"
	I0816 22:42:43.150427   19613 cri.go:76] found id: "c265ff52803b42d9b0ff6036d0ca9fe4b655c9875474baa9a0a3f7255d230e17"
	I0816 22:42:43.150431   19613 cri.go:76] found id: "f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2"
	I0816 22:42:43.150439   19613 cri.go:76] found id: "64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476"
	I0816 22:42:43.150444   19613 cri.go:76] found id: ""
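
	(The found-id list above is the output of the chained crictl invocation logged by cri.go:41. A sketch of how that command string can be assembled — an assumed helper for illustration, not minikube's actual code: one `crictl ps -a --quiet --label io.kubernetes.pod.namespace=<ns>` per namespace, joined with ';' and run under `sudo -s eval`.)

package main

import (
	"fmt"
	"strings"
)

// listCRICommand builds the exact shell command visible in the log above.
func listCRICommand(namespaces []string) string {
	parts := make([]string, 0, len(namespaces))
	for _, ns := range namespaces {
		parts = append(parts, fmt.Sprintf(
			"crictl ps -a --quiet --label io.kubernetes.pod.namespace=%s", ns))
	}
	return `sudo -s eval "` + strings.Join(parts, "; ") + `"`
}

func main() {
	fmt.Println(listCRICommand([]string{
		"kube-system", "kubernetes-dashboard", "storage-gluster", "istio-operator",
	}))
}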
	I0816 22:42:43.150500   19613 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0816 22:42:43.199954   19613 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"10155044a33d1a2066f40ea92c55b4176c520a2b073f79bdb473c013652977c8","pid":4729,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10155044a33d1a2066f40ea92c55b4176c520a2b073f79bdb473c013652977c8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10155044a33d1a2066f40ea92c55b4176c520a2b073f79bdb473c013652977c8/rootfs","created":"2021-08-16T22:41:58.076798055Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"10155044a33d1a2066f40ea92c55b4176c520a2b073f79bdb473c013652977c8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-20210816223156-6986_471096fb7e95e43637c465ace09ce732"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1e3071c5b1c506432365e271b588aadbfd2eda919a23dcea85d60022046acfb4","pid":4879,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1e3071c5b1c506
432365e271b588aadbfd2eda919a23dcea85d60022046acfb4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1e3071c5b1c506432365e271b588aadbfd2eda919a23dcea85d60022046acfb4/rootfs","created":"2021-08-16T22:41:59.3205451Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"fdbc7e84435328821d7f1b6d1eee3ccce44a4cdfbaf20d5c843145707da3faac"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"48a5117f3b82fb6acea368f95956203a0afe260e0d0e43a4af40c491a4706af6","pid":5814,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/48a5117f3b82fb6acea368f95956203a0afe260e0d0e43a4af40c491a4706af6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/48a5117f3b82fb6acea368f95956203a0afe260e0d0e43a4af40c491a4706af6/rootfs","created":"2021-08-16T22:42:27.462364827Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"48a5117f3b82fb6acea368f95956203a0afe260e0d0e43a4
af40c491a4706af6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_c8a5ee3a-f5df-4eeb-92b1-443ecef04ea2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5","pid":5383,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5/rootfs","created":"2021-08-16T22:42:23.112093805Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a2d86485038673f4321d61bfd4d731f0a0746472b180a2f808d2b959884579d2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"53b71c03b338b7fc65416496d2ec076d6ec7bd85f0afbf283b147b58501941bb","pid":5869,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53b71c03b3
38b7fc65416496d2ec076d6ec7bd85f0afbf283b147b58501941bb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53b71c03b338b7fc65416496d2ec076d6ec7bd85f0afbf283b147b58501941bb/rootfs","created":"2021-08-16T22:42:27.634037617Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"53b71c03b338b7fc65416496d2ec076d6ec7bd85f0afbf283b147b58501941bb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-4vvg9_1295b024-f6d7-4bfa-b763-0c0bee43cb71"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476","pid":6211,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476/rootfs","created":"2021-08-16T22:42:30.610892993Z","annotations":{"io.kubern
etes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"53b71c03b338b7fc65416496d2ec076d6ec7bd85f0afbf283b147b58501941bb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2","pid":6023,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2/rootfs","created":"2021-08-16T22:42:28.668202973Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"48a5117f3b82fb6acea368f95956203a0afe260e0d0e43a4af40c491a4706af6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a2d86485038673f4321d61bfd4d731f0a0746472b180a2f808d2b959884579d2","pid":5263,"status":"running","bundle":"/run/c
ontainerd/io.containerd.runtime.v2.task/k8s.io/a2d86485038673f4321d61bfd4d731f0a0746472b180a2f808d2b959884579d2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2d86485038673f4321d61bfd4d731f0a0746472b180a2f808d2b959884579d2/rootfs","created":"2021-08-16T22:42:22.447007755Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a2d86485038673f4321d61bfd4d731f0a0746472b180a2f808d2b959884579d2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-jhqbx_edacd358-46da-4db4-a8db-098f6edefb76"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a5a4025cb261585737124bf5b619554e77896d6c93458ee1baf7d4a451c195db","pid":4823,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a5a4025cb261585737124bf5b619554e77896d6c93458ee1baf7d4a451c195db","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a5a4025cb261585737124bf5b619554e77896d6c93458ee1baf7d4a451c195db/rootfs","created":"2021-08-16T22:41:58.97596251Z
","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"bbeb739aeb370f1d3a9ce15770f608057fbe200efc98901efb8dab150cdc0530"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a5ef942d9974feef7ad3431da2b0cfc95a8718a4ba6a9450740ecdf1febc6de3","pid":4736,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a5ef942d9974feef7ad3431da2b0cfc95a8718a4ba6a9450740ecdf1febc6de3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a5ef942d9974feef7ad3431da2b0cfc95a8718a4ba6a9450740ecdf1febc6de3/rootfs","created":"2021-08-16T22:41:58.097260973Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a5ef942d9974feef7ad3431da2b0cfc95a8718a4ba6a9450740ecdf1febc6de3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-20210816223156-6986_71ef3c733f161e4ee858bc77b08315e6"},"owner":"root"},{"ociVersion":"1.0.2-dev
","id":"a62073928ebf1d7d8d0a7654f1808f5abc0b090020ff16db9a3e169df156b6b5","pid":5752,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a62073928ebf1d7d8d0a7654f1808f5abc0b090020ff16db9a3e169df156b6b5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a62073928ebf1d7d8d0a7654f1808f5abc0b090020ff16db9a3e169df156b6b5/rootfs","created":"2021-08-16T22:42:27.233530817Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a62073928ebf1d7d8d0a7654f1808f5abc0b090020ff16db9a3e169df156b6b5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-dfjww_7a744b20-6d7f-4001-a322-7e5615cbf15f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bbeb739aeb370f1d3a9ce15770f608057fbe200efc98901efb8dab150cdc0530","pid":4723,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bbeb739aeb370f1d3a9ce15770f608057fbe200efc98901efb8dab150cdc0530","rootfs":"/run/containerd/io.containerd.r
untime.v2.task/k8s.io/bbeb739aeb370f1d3a9ce15770f608057fbe200efc98901efb8dab150cdc0530/rootfs","created":"2021-08-16T22:41:58.078701202Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bbeb739aeb370f1d3a9ce15770f608057fbe200efc98901efb8dab150cdc0530","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-20210816223156-6986_d7ad6ae9491e1156e0aa4b89ee3c2679"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cb12185bdacc363d50b453691deee789994d7ad81788b6df833ae986a27ce50a","pid":4863,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb12185bdacc363d50b453691deee789994d7ad81788b6df833ae986a27ce50a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb12185bdacc363d50b453691deee789994d7ad81788b6df833ae986a27ce50a/rootfs","created":"2021-08-16T22:41:59.373428519Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io
.kubernetes.cri.sandbox-id":"10155044a33d1a2066f40ea92c55b4176c520a2b073f79bdb473c013652977c8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd35b65dbf056e2a246c7f0327763ece3bf3d994f02dd6b9afd6e03f3ea48b74","pid":5985,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd35b65dbf056e2a246c7f0327763ece3bf3d994f02dd6b9afd6e03f3ea48b74","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd35b65dbf056e2a246c7f0327763ece3bf3d994f02dd6b9afd6e03f3ea48b74/rootfs","created":"2021-08-16T22:42:28.301010012Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cd35b65dbf056e2a246c7f0327763ece3bf3d994f02dd6b9afd6e03f3ea48b74","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-d2v4k_d2a31ab1-304a-4179-9e46-8625b64d8dc4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d3e8dbb1065a03fd63b9c5b4941d6a19dc60bcc0f70ee0d8d3ef12869d5d4a39","pid":5464,"status":"running","bundle":"/run
/containerd/io.containerd.runtime.v2.task/k8s.io/d3e8dbb1065a03fd63b9c5b4941d6a19dc60bcc0f70ee0d8d3ef12869d5d4a39","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3e8dbb1065a03fd63b9c5b4941d6a19dc60bcc0f70ee0d8d3ef12869d5d4a39/rootfs","created":"2021-08-16T22:42:23.457329732Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d3e8dbb1065a03fd63b9c5b4941d6a19dc60bcc0f70ee0d8d3ef12869d5d4a39","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-78fcd69978-9rlk6_83d3b042-5692-4c44-b6f2-65020120666e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e0bb5a872be5dee5a28ce3431d7728ab3b6d152dacd42c354ec65b03076e30d0","pid":5613,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e0bb5a872be5dee5a28ce3431d7728ab3b6d152dacd42c354ec65b03076e30d0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e0bb5a872be5dee5a28ce3431d7728ab3b6d152dacd42c354ec65b03076e30d0/rootfs","created":"2021-08-16T22:42:24
.711969859Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d3e8dbb1065a03fd63b9c5b4941d6a19dc60bcc0f70ee0d8d3ef12869d5d4a39"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f7e4e0952db6f386f632efe7b9c51f084992c1ef01a488ff933fd58c8c0522f0","pid":4790,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7e4e0952db6f386f632efe7b9c51f084992c1ef01a488ff933fd58c8c0522f0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7e4e0952db6f386f632efe7b9c51f084992c1ef01a488ff933fd58c8c0522f0/rootfs","created":"2021-08-16T22:41:58.805729358Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a5ef942d9974feef7ad3431da2b0cfc95a8718a4ba6a9450740ecdf1febc6de3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fdbc7e84435328821d7f1b6d1eee3ccce44a4cdfbaf20d5c843145707da3faac","pid":4709,"status":"runni
ng","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdbc7e84435328821d7f1b6d1eee3ccce44a4cdfbaf20d5c843145707da3faac","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdbc7e84435328821d7f1b6d1eee3ccce44a4cdfbaf20d5c843145707da3faac/rootfs","created":"2021-08-16T22:41:57.971119735Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"fdbc7e84435328821d7f1b6d1eee3ccce44a4cdfbaf20d5c843145707da3faac","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-20210816223156-6986_58ee2274ce0c14a00e4df1a99d5d675c"},"owner":"root"}]
	I0816 22:42:43.200190   19613 cri.go:113] list returned 18 containers
	I0816 22:42:43.200202   19613 cri.go:116] container: {ID:10155044a33d1a2066f40ea92c55b4176c520a2b073f79bdb473c013652977c8 Status:running}
	I0816 22:42:43.200216   19613 cri.go:118] skipping 10155044a33d1a2066f40ea92c55b4176c520a2b073f79bdb473c013652977c8 - not in ps
	I0816 22:42:43.200220   19613 cri.go:116] container: {ID:1e3071c5b1c506432365e271b588aadbfd2eda919a23dcea85d60022046acfb4 Status:running}
	I0816 22:42:43.200225   19613 cri.go:116] container: {ID:48a5117f3b82fb6acea368f95956203a0afe260e0d0e43a4af40c491a4706af6 Status:running}
	I0816 22:42:43.200230   19613 cri.go:118] skipping 48a5117f3b82fb6acea368f95956203a0afe260e0d0e43a4af40c491a4706af6 - not in ps
	I0816 22:42:43.200233   19613 cri.go:116] container: {ID:4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5 Status:running}
	I0816 22:42:43.200237   19613 cri.go:116] container: {ID:53b71c03b338b7fc65416496d2ec076d6ec7bd85f0afbf283b147b58501941bb Status:running}
	I0816 22:42:43.200242   19613 cri.go:118] skipping 53b71c03b338b7fc65416496d2ec076d6ec7bd85f0afbf283b147b58501941bb - not in ps
	I0816 22:42:43.200245   19613 cri.go:116] container: {ID:64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476 Status:running}
	I0816 22:42:43.200249   19613 cri.go:116] container: {ID:89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2 Status:running}
	I0816 22:42:43.200253   19613 cri.go:116] container: {ID:a2d86485038673f4321d61bfd4d731f0a0746472b180a2f808d2b959884579d2 Status:running}
	I0816 22:42:43.200257   19613 cri.go:118] skipping a2d86485038673f4321d61bfd4d731f0a0746472b180a2f808d2b959884579d2 - not in ps
	I0816 22:42:43.200260   19613 cri.go:116] container: {ID:a5a4025cb261585737124bf5b619554e77896d6c93458ee1baf7d4a451c195db Status:running}
	I0816 22:42:43.200264   19613 cri.go:116] container: {ID:a5ef942d9974feef7ad3431da2b0cfc95a8718a4ba6a9450740ecdf1febc6de3 Status:running}
	I0816 22:42:43.200271   19613 cri.go:118] skipping a5ef942d9974feef7ad3431da2b0cfc95a8718a4ba6a9450740ecdf1febc6de3 - not in ps
	I0816 22:42:43.200274   19613 cri.go:116] container: {ID:a62073928ebf1d7d8d0a7654f1808f5abc0b090020ff16db9a3e169df156b6b5 Status:running}
	I0816 22:42:43.200278   19613 cri.go:118] skipping a62073928ebf1d7d8d0a7654f1808f5abc0b090020ff16db9a3e169df156b6b5 - not in ps
	I0816 22:42:43.200284   19613 cri.go:116] container: {ID:bbeb739aeb370f1d3a9ce15770f608057fbe200efc98901efb8dab150cdc0530 Status:running}
	I0816 22:42:43.200288   19613 cri.go:118] skipping bbeb739aeb370f1d3a9ce15770f608057fbe200efc98901efb8dab150cdc0530 - not in ps
	I0816 22:42:43.200291   19613 cri.go:116] container: {ID:cb12185bdacc363d50b453691deee789994d7ad81788b6df833ae986a27ce50a Status:running}
	I0816 22:42:43.200297   19613 cri.go:116] container: {ID:cd35b65dbf056e2a246c7f0327763ece3bf3d994f02dd6b9afd6e03f3ea48b74 Status:running}
	I0816 22:42:43.200302   19613 cri.go:118] skipping cd35b65dbf056e2a246c7f0327763ece3bf3d994f02dd6b9afd6e03f3ea48b74 - not in ps
	I0816 22:42:43.200306   19613 cri.go:116] container: {ID:d3e8dbb1065a03fd63b9c5b4941d6a19dc60bcc0f70ee0d8d3ef12869d5d4a39 Status:running}
	I0816 22:42:43.200310   19613 cri.go:118] skipping d3e8dbb1065a03fd63b9c5b4941d6a19dc60bcc0f70ee0d8d3ef12869d5d4a39 - not in ps
	I0816 22:42:43.200314   19613 cri.go:116] container: {ID:e0bb5a872be5dee5a28ce3431d7728ab3b6d152dacd42c354ec65b03076e30d0 Status:running}
	I0816 22:42:43.200317   19613 cri.go:116] container: {ID:f7e4e0952db6f386f632efe7b9c51f084992c1ef01a488ff933fd58c8c0522f0 Status:running}
	I0816 22:42:43.200321   19613 cri.go:116] container: {ID:fdbc7e84435328821d7f1b6d1eee3ccce44a4cdfbaf20d5c843145707da3faac Status:running}
	I0816 22:42:43.200325   19613 cri.go:118] skipping fdbc7e84435328821d7f1b6d1eee3ccce44a4cdfbaf20d5c843145707da3faac - not in ps
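
	(The "list returned 18 containers" / "skipping ... - not in ps" decisions above come from parsing the `runc list -f json` dump. A sketch of that decode-and-filter step, with field names taken from the JSON in the log; the helper itself is assumed, not minikube's code.)

package main

import (
	"encoding/json"
	"fmt"
)

// runcContainer holds the fields of `runc list -f json` used here.
type runcContainer struct {
	ID          string            `json:"id"`
	Status      string            `json:"status"`
	Annotations map[string]string `json:"annotations"`
}

// filterRunning keeps containers that are running and present in the
// crictl id set, mirroring the cri.go:116/118 decisions in the log.
func filterRunning(listJSON []byte, inPS map[string]bool) ([]string, error) {
	var all []runcContainer
	if err := json.Unmarshal(listJSON, &all); err != nil {
		return nil, err
	}
	var keep []string
	for _, c := range all {
		if !inPS[c.ID] {
			continue // "skipping <id> - not in ps"
		}
		if c.Status != "running" {
			continue // e.g. state = "paused", want "running"
		}
		keep = append(keep, c.ID)
	}
	return keep, nil
}

func main() {
	sample := []byte(`[{"id":"abc","status":"running"},{"id":"def","status":"paused"}]`)
	ids, err := filterRunning(sample, map[string]bool{"abc": true, "def": true})
	if err != nil {
		panic(err)
	}
	fmt.Println(ids) // [abc]
}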
	I0816 22:42:43.200364   19613 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 1e3071c5b1c506432365e271b588aadbfd2eda919a23dcea85d60022046acfb4
	I0816 22:42:43.224730   19613 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 1e3071c5b1c506432365e271b588aadbfd2eda919a23dcea85d60022046acfb4 4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5
	I0816 22:42:43.248323   19613 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 1e3071c5b1c506432365e271b588aadbfd2eda919a23dcea85d60022046acfb4 4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:42:43Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
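
	(The exit-status-80 pause failure traces to the invocation above: `runc pause` accepts exactly one container id, but two were passed in a single call. A sketch of the obvious per-id workaround — illustrative only; minikube's own fix may differ.)

package main

import (
	"fmt"
	"os/exec"
)

// pauseAll pauses each container in its own runc invocation, since
// `runc pause` requires exactly 1 argument.
func pauseAll(root string, ids []string) error {
	for _, id := range ids {
		// one id per call: `sudo runc --root <root> pause <id>`
		cmd := exec.Command("sudo", "runc", "--root", root, "pause", id)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("runc pause %s: %v\n%s", id, err, out)
		}
	}
	return nil
}

func main() {
	err := pauseAll("/run/containerd/runc/k8s.io", []string{
		"1e3071c5b1c506432365e271b588aadbfd2eda919a23dcea85d60022046acfb4",
		"4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5",
	})
	if err != nil {
		fmt.Println(err)
	}
}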
	I0816 22:42:43.524807   19613 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:42:43.536275   19613 pause.go:50] kubelet running: false
	I0816 22:42:43.536350   19613 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:42:43.713870   19613 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:42:43.713943   19613 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:42:43.835620   19613 cri.go:76] found id: "89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2"
	I0816 22:42:43.835655   19613 cri.go:76] found id: "e0bb5a872be5dee5a28ce3431d7728ab3b6d152dacd42c354ec65b03076e30d0"
	I0816 22:42:43.835660   19613 cri.go:76] found id: "4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5"
	I0816 22:42:43.835673   19613 cri.go:76] found id: "1e3071c5b1c506432365e271b588aadbfd2eda919a23dcea85d60022046acfb4"
	I0816 22:42:43.835678   19613 cri.go:76] found id: "cb12185bdacc363d50b453691deee789994d7ad81788b6df833ae986a27ce50a"
	I0816 22:42:43.835684   19613 cri.go:76] found id: "a5a4025cb261585737124bf5b619554e77896d6c93458ee1baf7d4a451c195db"
	I0816 22:42:43.835690   19613 cri.go:76] found id: "f7e4e0952db6f386f632efe7b9c51f084992c1ef01a488ff933fd58c8c0522f0"
	I0816 22:42:43.835696   19613 cri.go:76] found id: "c265ff52803b42d9b0ff6036d0ca9fe4b655c9875474baa9a0a3f7255d230e17"
	I0816 22:42:43.835705   19613 cri.go:76] found id: "f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2"
	I0816 22:42:43.835719   19613 cri.go:76] found id: "64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476"
	I0816 22:42:43.835727   19613 cri.go:76] found id: ""
	I0816 22:42:43.835775   19613 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0816 22:42:43.891223   19613 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"10155044a33d1a2066f40ea92c55b4176c520a2b073f79bdb473c013652977c8","pid":4729,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10155044a33d1a2066f40ea92c55b4176c520a2b073f79bdb473c013652977c8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10155044a33d1a2066f40ea92c55b4176c520a2b073f79bdb473c013652977c8/rootfs","created":"2021-08-16T22:41:58.076798055Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"10155044a33d1a2066f40ea92c55b4176c520a2b073f79bdb473c013652977c8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-20210816223156-6986_471096fb7e95e43637c465ace09ce732"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1e3071c5b1c506432365e271b588aadbfd2eda919a23dcea85d60022046acfb4","pid":4879,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1e3071c5b1c5064
32365e271b588aadbfd2eda919a23dcea85d60022046acfb4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1e3071c5b1c506432365e271b588aadbfd2eda919a23dcea85d60022046acfb4/rootfs","created":"2021-08-16T22:41:59.3205451Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"fdbc7e84435328821d7f1b6d1eee3ccce44a4cdfbaf20d5c843145707da3faac"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"48a5117f3b82fb6acea368f95956203a0afe260e0d0e43a4af40c491a4706af6","pid":5814,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/48a5117f3b82fb6acea368f95956203a0afe260e0d0e43a4af40c491a4706af6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/48a5117f3b82fb6acea368f95956203a0afe260e0d0e43a4af40c491a4706af6/rootfs","created":"2021-08-16T22:42:27.462364827Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"48a5117f3b82fb6acea368f95956203a0afe260e0d0e43a4a
f40c491a4706af6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_c8a5ee3a-f5df-4eeb-92b1-443ecef04ea2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5","pid":5383,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5/rootfs","created":"2021-08-16T22:42:23.112093805Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a2d86485038673f4321d61bfd4d731f0a0746472b180a2f808d2b959884579d2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"53b71c03b338b7fc65416496d2ec076d6ec7bd85f0afbf283b147b58501941bb","pid":5869,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53b71c03b33
8b7fc65416496d2ec076d6ec7bd85f0afbf283b147b58501941bb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53b71c03b338b7fc65416496d2ec076d6ec7bd85f0afbf283b147b58501941bb/rootfs","created":"2021-08-16T22:42:27.634037617Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"53b71c03b338b7fc65416496d2ec076d6ec7bd85f0afbf283b147b58501941bb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-4vvg9_1295b024-f6d7-4bfa-b763-0c0bee43cb71"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476","pid":6211,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476/rootfs","created":"2021-08-16T22:42:30.610892993Z","annotations":{"io.kuberne
tes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"53b71c03b338b7fc65416496d2ec076d6ec7bd85f0afbf283b147b58501941bb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2","pid":6023,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2/rootfs","created":"2021-08-16T22:42:28.668202973Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"48a5117f3b82fb6acea368f95956203a0afe260e0d0e43a4af40c491a4706af6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a2d86485038673f4321d61bfd4d731f0a0746472b180a2f808d2b959884579d2","pid":5263,"status":"running","bundle":"/run/co
ntainerd/io.containerd.runtime.v2.task/k8s.io/a2d86485038673f4321d61bfd4d731f0a0746472b180a2f808d2b959884579d2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2d86485038673f4321d61bfd4d731f0a0746472b180a2f808d2b959884579d2/rootfs","created":"2021-08-16T22:42:22.447007755Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a2d86485038673f4321d61bfd4d731f0a0746472b180a2f808d2b959884579d2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-jhqbx_edacd358-46da-4db4-a8db-098f6edefb76"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a5a4025cb261585737124bf5b619554e77896d6c93458ee1baf7d4a451c195db","pid":4823,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a5a4025cb261585737124bf5b619554e77896d6c93458ee1baf7d4a451c195db","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a5a4025cb261585737124bf5b619554e77896d6c93458ee1baf7d4a451c195db/rootfs","created":"2021-08-16T22:41:58.97596251Z"
,"annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"bbeb739aeb370f1d3a9ce15770f608057fbe200efc98901efb8dab150cdc0530"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a5ef942d9974feef7ad3431da2b0cfc95a8718a4ba6a9450740ecdf1febc6de3","pid":4736,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a5ef942d9974feef7ad3431da2b0cfc95a8718a4ba6a9450740ecdf1febc6de3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a5ef942d9974feef7ad3431da2b0cfc95a8718a4ba6a9450740ecdf1febc6de3/rootfs","created":"2021-08-16T22:41:58.097260973Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a5ef942d9974feef7ad3431da2b0cfc95a8718a4ba6a9450740ecdf1febc6de3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-20210816223156-6986_71ef3c733f161e4ee858bc77b08315e6"},"owner":"root"},{"ociVersion":"1.0.2-dev"
,"id":"a62073928ebf1d7d8d0a7654f1808f5abc0b090020ff16db9a3e169df156b6b5","pid":5752,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a62073928ebf1d7d8d0a7654f1808f5abc0b090020ff16db9a3e169df156b6b5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a62073928ebf1d7d8d0a7654f1808f5abc0b090020ff16db9a3e169df156b6b5/rootfs","created":"2021-08-16T22:42:27.233530817Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a62073928ebf1d7d8d0a7654f1808f5abc0b090020ff16db9a3e169df156b6b5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-dfjww_7a744b20-6d7f-4001-a322-7e5615cbf15f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bbeb739aeb370f1d3a9ce15770f608057fbe200efc98901efb8dab150cdc0530","pid":4723,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bbeb739aeb370f1d3a9ce15770f608057fbe200efc98901efb8dab150cdc0530","rootfs":"/run/containerd/io.containerd.ru
ntime.v2.task/k8s.io/bbeb739aeb370f1d3a9ce15770f608057fbe200efc98901efb8dab150cdc0530/rootfs","created":"2021-08-16T22:41:58.078701202Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bbeb739aeb370f1d3a9ce15770f608057fbe200efc98901efb8dab150cdc0530","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-20210816223156-6986_d7ad6ae9491e1156e0aa4b89ee3c2679"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cb12185bdacc363d50b453691deee789994d7ad81788b6df833ae986a27ce50a","pid":4863,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb12185bdacc363d50b453691deee789994d7ad81788b6df833ae986a27ce50a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb12185bdacc363d50b453691deee789994d7ad81788b6df833ae986a27ce50a/rootfs","created":"2021-08-16T22:41:59.373428519Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.
kubernetes.cri.sandbox-id":"10155044a33d1a2066f40ea92c55b4176c520a2b073f79bdb473c013652977c8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd35b65dbf056e2a246c7f0327763ece3bf3d994f02dd6b9afd6e03f3ea48b74","pid":5985,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd35b65dbf056e2a246c7f0327763ece3bf3d994f02dd6b9afd6e03f3ea48b74","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd35b65dbf056e2a246c7f0327763ece3bf3d994f02dd6b9afd6e03f3ea48b74/rootfs","created":"2021-08-16T22:42:28.301010012Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cd35b65dbf056e2a246c7f0327763ece3bf3d994f02dd6b9afd6e03f3ea48b74","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-d2v4k_d2a31ab1-304a-4179-9e46-8625b64d8dc4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d3e8dbb1065a03fd63b9c5b4941d6a19dc60bcc0f70ee0d8d3ef12869d5d4a39","pid":5464,"status":"running","bundle":"/run/
containerd/io.containerd.runtime.v2.task/k8s.io/d3e8dbb1065a03fd63b9c5b4941d6a19dc60bcc0f70ee0d8d3ef12869d5d4a39","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3e8dbb1065a03fd63b9c5b4941d6a19dc60bcc0f70ee0d8d3ef12869d5d4a39/rootfs","created":"2021-08-16T22:42:23.457329732Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d3e8dbb1065a03fd63b9c5b4941d6a19dc60bcc0f70ee0d8d3ef12869d5d4a39","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-78fcd69978-9rlk6_83d3b042-5692-4c44-b6f2-65020120666e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e0bb5a872be5dee5a28ce3431d7728ab3b6d152dacd42c354ec65b03076e30d0","pid":5613,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e0bb5a872be5dee5a28ce3431d7728ab3b6d152dacd42c354ec65b03076e30d0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e0bb5a872be5dee5a28ce3431d7728ab3b6d152dacd42c354ec65b03076e30d0/rootfs","created":"2021-08-16T22:42:24.
711969859Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d3e8dbb1065a03fd63b9c5b4941d6a19dc60bcc0f70ee0d8d3ef12869d5d4a39"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f7e4e0952db6f386f632efe7b9c51f084992c1ef01a488ff933fd58c8c0522f0","pid":4790,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7e4e0952db6f386f632efe7b9c51f084992c1ef01a488ff933fd58c8c0522f0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7e4e0952db6f386f632efe7b9c51f084992c1ef01a488ff933fd58c8c0522f0/rootfs","created":"2021-08-16T22:41:58.805729358Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a5ef942d9974feef7ad3431da2b0cfc95a8718a4ba6a9450740ecdf1febc6de3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fdbc7e84435328821d7f1b6d1eee3ccce44a4cdfbaf20d5c843145707da3faac","pid":4709,"status":"runnin
g","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdbc7e84435328821d7f1b6d1eee3ccce44a4cdfbaf20d5c843145707da3faac","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdbc7e84435328821d7f1b6d1eee3ccce44a4cdfbaf20d5c843145707da3faac/rootfs","created":"2021-08-16T22:41:57.971119735Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"fdbc7e84435328821d7f1b6d1eee3ccce44a4cdfbaf20d5c843145707da3faac","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-20210816223156-6986_58ee2274ce0c14a00e4df1a99d5d675c"},"owner":"root"}]
	I0816 22:42:43.891504   19613 cri.go:113] list returned 18 containers
	I0816 22:42:43.891522   19613 cri.go:116] container: {ID:10155044a33d1a2066f40ea92c55b4176c520a2b073f79bdb473c013652977c8 Status:running}
	I0816 22:42:43.891543   19613 cri.go:118] skipping 10155044a33d1a2066f40ea92c55b4176c520a2b073f79bdb473c013652977c8 - not in ps
	I0816 22:42:43.891551   19613 cri.go:116] container: {ID:1e3071c5b1c506432365e271b588aadbfd2eda919a23dcea85d60022046acfb4 Status:paused}
	I0816 22:42:43.891563   19613 cri.go:122] skipping {1e3071c5b1c506432365e271b588aadbfd2eda919a23dcea85d60022046acfb4 paused}: state = "paused", want "running"
	I0816 22:42:43.891581   19613 cri.go:116] container: {ID:48a5117f3b82fb6acea368f95956203a0afe260e0d0e43a4af40c491a4706af6 Status:running}
	I0816 22:42:43.891588   19613 cri.go:118] skipping 48a5117f3b82fb6acea368f95956203a0afe260e0d0e43a4af40c491a4706af6 - not in ps
	I0816 22:42:43.891597   19613 cri.go:116] container: {ID:4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5 Status:running}
	I0816 22:42:43.891608   19613 cri.go:116] container: {ID:53b71c03b338b7fc65416496d2ec076d6ec7bd85f0afbf283b147b58501941bb Status:running}
	I0816 22:42:43.891619   19613 cri.go:118] skipping 53b71c03b338b7fc65416496d2ec076d6ec7bd85f0afbf283b147b58501941bb - not in ps
	I0816 22:42:43.891625   19613 cri.go:116] container: {ID:64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476 Status:running}
	I0816 22:42:43.891635   19613 cri.go:116] container: {ID:89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2 Status:running}
	I0816 22:42:43.891645   19613 cri.go:116] container: {ID:a2d86485038673f4321d61bfd4d731f0a0746472b180a2f808d2b959884579d2 Status:running}
	I0816 22:42:43.891655   19613 cri.go:118] skipping a2d86485038673f4321d61bfd4d731f0a0746472b180a2f808d2b959884579d2 - not in ps
	I0816 22:42:43.891661   19613 cri.go:116] container: {ID:a5a4025cb261585737124bf5b619554e77896d6c93458ee1baf7d4a451c195db Status:running}
	I0816 22:42:43.891673   19613 cri.go:116] container: {ID:a5ef942d9974feef7ad3431da2b0cfc95a8718a4ba6a9450740ecdf1febc6de3 Status:running}
	I0816 22:42:43.891681   19613 cri.go:118] skipping a5ef942d9974feef7ad3431da2b0cfc95a8718a4ba6a9450740ecdf1febc6de3 - not in ps
	I0816 22:42:43.891691   19613 cri.go:116] container: {ID:a62073928ebf1d7d8d0a7654f1808f5abc0b090020ff16db9a3e169df156b6b5 Status:running}
	I0816 22:42:43.891701   19613 cri.go:118] skipping a62073928ebf1d7d8d0a7654f1808f5abc0b090020ff16db9a3e169df156b6b5 - not in ps
	I0816 22:42:43.891707   19613 cri.go:116] container: {ID:bbeb739aeb370f1d3a9ce15770f608057fbe200efc98901efb8dab150cdc0530 Status:running}
	I0816 22:42:43.891719   19613 cri.go:118] skipping bbeb739aeb370f1d3a9ce15770f608057fbe200efc98901efb8dab150cdc0530 - not in ps
	I0816 22:42:43.891725   19613 cri.go:116] container: {ID:cb12185bdacc363d50b453691deee789994d7ad81788b6df833ae986a27ce50a Status:running}
	I0816 22:42:43.891737   19613 cri.go:116] container: {ID:cd35b65dbf056e2a246c7f0327763ece3bf3d994f02dd6b9afd6e03f3ea48b74 Status:running}
	I0816 22:42:43.891745   19613 cri.go:118] skipping cd35b65dbf056e2a246c7f0327763ece3bf3d994f02dd6b9afd6e03f3ea48b74 - not in ps
	I0816 22:42:43.891764   19613 cri.go:116] container: {ID:d3e8dbb1065a03fd63b9c5b4941d6a19dc60bcc0f70ee0d8d3ef12869d5d4a39 Status:running}
	I0816 22:42:43.891775   19613 cri.go:118] skipping d3e8dbb1065a03fd63b9c5b4941d6a19dc60bcc0f70ee0d8d3ef12869d5d4a39 - not in ps
	I0816 22:42:43.891782   19613 cri.go:116] container: {ID:e0bb5a872be5dee5a28ce3431d7728ab3b6d152dacd42c354ec65b03076e30d0 Status:running}
	I0816 22:42:43.891792   19613 cri.go:116] container: {ID:f7e4e0952db6f386f632efe7b9c51f084992c1ef01a488ff933fd58c8c0522f0 Status:running}
	I0816 22:42:43.891800   19613 cri.go:116] container: {ID:fdbc7e84435328821d7f1b6d1eee3ccce44a4cdfbaf20d5c843145707da3faac Status:running}
	I0816 22:42:43.891811   19613 cri.go:118] skipping fdbc7e84435328821d7f1b6d1eee3ccce44a4cdfbaf20d5c843145707da3faac - not in ps
	I0816 22:42:43.891866   19613 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5
	I0816 22:42:43.920784   19613 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5 64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476
	I0816 22:42:43.944710   19613 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5 64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:42:43Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
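
	(The retry.go:31 lines show the failed pause being retried with growing delays: 276ms, then 540ms. A sketch of that retry pattern; the roughly-doubling jittered backoff is an assumption — only the increasing delays are visible in the log.)

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry re-runs op up to attempts times, roughly doubling the wait
// between failures, and returns the last error if all attempts fail.
func retry(attempts int, initial time.Duration, op func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // grow the wait between attempts
	}
	return err
}

func main() {
	calls := 0
	err := retry(3, 270*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New(`runc: "pause" requires exactly 1 argument(s)`)
		}
		return nil
	})
	fmt.Println("final:", err)
}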
	I0816 22:42:44.485107   19613 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:42:44.496539   19613 pause.go:50] kubelet running: false
	I0816 22:42:44.496603   19613 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:42:44.697976   19613 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:42:44.698086   19613 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:42:44.830991   19613 cri.go:76] found id: "89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2"
	I0816 22:42:44.831020   19613 cri.go:76] found id: "e0bb5a872be5dee5a28ce3431d7728ab3b6d152dacd42c354ec65b03076e30d0"
	I0816 22:42:44.831027   19613 cri.go:76] found id: "4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5"
	I0816 22:42:44.831031   19613 cri.go:76] found id: "1e3071c5b1c506432365e271b588aadbfd2eda919a23dcea85d60022046acfb4"
	I0816 22:42:44.831034   19613 cri.go:76] found id: "cb12185bdacc363d50b453691deee789994d7ad81788b6df833ae986a27ce50a"
	I0816 22:42:44.831039   19613 cri.go:76] found id: "a5a4025cb261585737124bf5b619554e77896d6c93458ee1baf7d4a451c195db"
	I0816 22:42:44.831042   19613 cri.go:76] found id: "f7e4e0952db6f386f632efe7b9c51f084992c1ef01a488ff933fd58c8c0522f0"
	I0816 22:42:44.831047   19613 cri.go:76] found id: "c265ff52803b42d9b0ff6036d0ca9fe4b655c9875474baa9a0a3f7255d230e17"
	I0816 22:42:44.831051   19613 cri.go:76] found id: "f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2"
	I0816 22:42:44.831058   19613 cri.go:76] found id: "64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476"
	I0816 22:42:44.831070   19613 cri.go:76] found id: ""
	I0816 22:42:44.831118   19613 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0816 22:42:44.880109   19613 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"10155044a33d1a2066f40ea92c55b4176c520a2b073f79bdb473c013652977c8","pid":4729,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10155044a33d1a2066f40ea92c55b4176c520a2b073f79bdb473c013652977c8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/10155044a33d1a2066f40ea92c55b4176c520a2b073f79bdb473c013652977c8/rootfs","created":"2021-08-16T22:41:58.076798055Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"10155044a33d1a2066f40ea92c55b4176c520a2b073f79bdb473c013652977c8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-20210816223156-6986_471096fb7e95e43637c465ace09ce732"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1e3071c5b1c506432365e271b588aadbfd2eda919a23dcea85d60022046acfb4","pid":4879,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1e3071c5b1c5064
32365e271b588aadbfd2eda919a23dcea85d60022046acfb4","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1e3071c5b1c506432365e271b588aadbfd2eda919a23dcea85d60022046acfb4/rootfs","created":"2021-08-16T22:41:59.3205451Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"fdbc7e84435328821d7f1b6d1eee3ccce44a4cdfbaf20d5c843145707da3faac"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"48a5117f3b82fb6acea368f95956203a0afe260e0d0e43a4af40c491a4706af6","pid":5814,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/48a5117f3b82fb6acea368f95956203a0afe260e0d0e43a4af40c491a4706af6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/48a5117f3b82fb6acea368f95956203a0afe260e0d0e43a4af40c491a4706af6/rootfs","created":"2021-08-16T22:42:27.462364827Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"48a5117f3b82fb6acea368f95956203a0afe260e0d0e43a4a
f40c491a4706af6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_c8a5ee3a-f5df-4eeb-92b1-443ecef04ea2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5","pid":5383,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5/rootfs","created":"2021-08-16T22:42:23.112093805Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a2d86485038673f4321d61bfd4d731f0a0746472b180a2f808d2b959884579d2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"53b71c03b338b7fc65416496d2ec076d6ec7bd85f0afbf283b147b58501941bb","pid":5869,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53b71c03b338
b7fc65416496d2ec076d6ec7bd85f0afbf283b147b58501941bb","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/53b71c03b338b7fc65416496d2ec076d6ec7bd85f0afbf283b147b58501941bb/rootfs","created":"2021-08-16T22:42:27.634037617Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"53b71c03b338b7fc65416496d2ec076d6ec7bd85f0afbf283b147b58501941bb","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-4vvg9_1295b024-f6d7-4bfa-b763-0c0bee43cb71"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476","pid":6211,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476/rootfs","created":"2021-08-16T22:42:30.610892993Z","annotations":{"io.kubernet
es.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"53b71c03b338b7fc65416496d2ec076d6ec7bd85f0afbf283b147b58501941bb"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2","pid":6023,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2/rootfs","created":"2021-08-16T22:42:28.668202973Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"48a5117f3b82fb6acea368f95956203a0afe260e0d0e43a4af40c491a4706af6"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a2d86485038673f4321d61bfd4d731f0a0746472b180a2f808d2b959884579d2","pid":5263,"status":"running","bundle":"/run/con
tainerd/io.containerd.runtime.v2.task/k8s.io/a2d86485038673f4321d61bfd4d731f0a0746472b180a2f808d2b959884579d2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2d86485038673f4321d61bfd4d731f0a0746472b180a2f808d2b959884579d2/rootfs","created":"2021-08-16T22:42:22.447007755Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a2d86485038673f4321d61bfd4d731f0a0746472b180a2f808d2b959884579d2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-jhqbx_edacd358-46da-4db4-a8db-098f6edefb76"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a5a4025cb261585737124bf5b619554e77896d6c93458ee1baf7d4a451c195db","pid":4823,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a5a4025cb261585737124bf5b619554e77896d6c93458ee1baf7d4a451c195db","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a5a4025cb261585737124bf5b619554e77896d6c93458ee1baf7d4a451c195db/rootfs","created":"2021-08-16T22:41:58.97596251Z",
"annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"bbeb739aeb370f1d3a9ce15770f608057fbe200efc98901efb8dab150cdc0530"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"a5ef942d9974feef7ad3431da2b0cfc95a8718a4ba6a9450740ecdf1febc6de3","pid":4736,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a5ef942d9974feef7ad3431da2b0cfc95a8718a4ba6a9450740ecdf1febc6de3","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a5ef942d9974feef7ad3431da2b0cfc95a8718a4ba6a9450740ecdf1febc6de3/rootfs","created":"2021-08-16T22:41:58.097260973Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a5ef942d9974feef7ad3431da2b0cfc95a8718a4ba6a9450740ecdf1febc6de3","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-20210816223156-6986_71ef3c733f161e4ee858bc77b08315e6"},"owner":"root"},{"ociVersion":"1.0.2-dev",
"id":"a62073928ebf1d7d8d0a7654f1808f5abc0b090020ff16db9a3e169df156b6b5","pid":5752,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a62073928ebf1d7d8d0a7654f1808f5abc0b090020ff16db9a3e169df156b6b5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/a62073928ebf1d7d8d0a7654f1808f5abc0b090020ff16db9a3e169df156b6b5/rootfs","created":"2021-08-16T22:42:27.233530817Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"a62073928ebf1d7d8d0a7654f1808f5abc0b090020ff16db9a3e169df156b6b5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-dfjww_7a744b20-6d7f-4001-a322-7e5615cbf15f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bbeb739aeb370f1d3a9ce15770f608057fbe200efc98901efb8dab150cdc0530","pid":4723,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bbeb739aeb370f1d3a9ce15770f608057fbe200efc98901efb8dab150cdc0530","rootfs":"/run/containerd/io.containerd.run
time.v2.task/k8s.io/bbeb739aeb370f1d3a9ce15770f608057fbe200efc98901efb8dab150cdc0530/rootfs","created":"2021-08-16T22:41:58.078701202Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bbeb739aeb370f1d3a9ce15770f608057fbe200efc98901efb8dab150cdc0530","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-20210816223156-6986_d7ad6ae9491e1156e0aa4b89ee3c2679"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cb12185bdacc363d50b453691deee789994d7ad81788b6df833ae986a27ce50a","pid":4863,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb12185bdacc363d50b453691deee789994d7ad81788b6df833ae986a27ce50a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cb12185bdacc363d50b453691deee789994d7ad81788b6df833ae986a27ce50a/rootfs","created":"2021-08-16T22:41:59.373428519Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.k
ubernetes.cri.sandbox-id":"10155044a33d1a2066f40ea92c55b4176c520a2b073f79bdb473c013652977c8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd35b65dbf056e2a246c7f0327763ece3bf3d994f02dd6b9afd6e03f3ea48b74","pid":5985,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd35b65dbf056e2a246c7f0327763ece3bf3d994f02dd6b9afd6e03f3ea48b74","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd35b65dbf056e2a246c7f0327763ece3bf3d994f02dd6b9afd6e03f3ea48b74/rootfs","created":"2021-08-16T22:42:28.301010012Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cd35b65dbf056e2a246c7f0327763ece3bf3d994f02dd6b9afd6e03f3ea48b74","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-d2v4k_d2a31ab1-304a-4179-9e46-8625b64d8dc4"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d3e8dbb1065a03fd63b9c5b4941d6a19dc60bcc0f70ee0d8d3ef12869d5d4a39","pid":5464,"status":"running","bundle":"/run/c
ontainerd/io.containerd.runtime.v2.task/k8s.io/d3e8dbb1065a03fd63b9c5b4941d6a19dc60bcc0f70ee0d8d3ef12869d5d4a39","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d3e8dbb1065a03fd63b9c5b4941d6a19dc60bcc0f70ee0d8d3ef12869d5d4a39/rootfs","created":"2021-08-16T22:42:23.457329732Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d3e8dbb1065a03fd63b9c5b4941d6a19dc60bcc0f70ee0d8d3ef12869d5d4a39","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-78fcd69978-9rlk6_83d3b042-5692-4c44-b6f2-65020120666e"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e0bb5a872be5dee5a28ce3431d7728ab3b6d152dacd42c354ec65b03076e30d0","pid":5613,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e0bb5a872be5dee5a28ce3431d7728ab3b6d152dacd42c354ec65b03076e30d0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e0bb5a872be5dee5a28ce3431d7728ab3b6d152dacd42c354ec65b03076e30d0/rootfs","created":"2021-08-16T22:42:24.7
11969859Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d3e8dbb1065a03fd63b9c5b4941d6a19dc60bcc0f70ee0d8d3ef12869d5d4a39"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"f7e4e0952db6f386f632efe7b9c51f084992c1ef01a488ff933fd58c8c0522f0","pid":4790,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7e4e0952db6f386f632efe7b9c51f084992c1ef01a488ff933fd58c8c0522f0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7e4e0952db6f386f632efe7b9c51f084992c1ef01a488ff933fd58c8c0522f0/rootfs","created":"2021-08-16T22:41:58.805729358Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"a5ef942d9974feef7ad3431da2b0cfc95a8718a4ba6a9450740ecdf1febc6de3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"fdbc7e84435328821d7f1b6d1eee3ccce44a4cdfbaf20d5c843145707da3faac","pid":4709,"status":"running
","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdbc7e84435328821d7f1b6d1eee3ccce44a4cdfbaf20d5c843145707da3faac","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/fdbc7e84435328821d7f1b6d1eee3ccce44a4cdfbaf20d5c843145707da3faac/rootfs","created":"2021-08-16T22:41:57.971119735Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"fdbc7e84435328821d7f1b6d1eee3ccce44a4cdfbaf20d5c843145707da3faac","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-20210816223156-6986_58ee2274ce0c14a00e4df1a99d5d675c"},"owner":"root"}]
	I0816 22:42:44.880405   19613 cri.go:113] list returned 18 containers
	I0816 22:42:44.880421   19613 cri.go:116] container: {ID:10155044a33d1a2066f40ea92c55b4176c520a2b073f79bdb473c013652977c8 Status:running}
	I0816 22:42:44.880432   19613 cri.go:118] skipping 10155044a33d1a2066f40ea92c55b4176c520a2b073f79bdb473c013652977c8 - not in ps
	I0816 22:42:44.880436   19613 cri.go:116] container: {ID:1e3071c5b1c506432365e271b588aadbfd2eda919a23dcea85d60022046acfb4 Status:paused}
	I0816 22:42:44.880444   19613 cri.go:122] skipping {1e3071c5b1c506432365e271b588aadbfd2eda919a23dcea85d60022046acfb4 paused}: state = "paused", want "running"
	I0816 22:42:44.880457   19613 cri.go:116] container: {ID:48a5117f3b82fb6acea368f95956203a0afe260e0d0e43a4af40c491a4706af6 Status:running}
	I0816 22:42:44.880462   19613 cri.go:118] skipping 48a5117f3b82fb6acea368f95956203a0afe260e0d0e43a4af40c491a4706af6 - not in ps
	I0816 22:42:44.880466   19613 cri.go:116] container: {ID:4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5 Status:paused}
	I0816 22:42:44.880471   19613 cri.go:122] skipping {4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5 paused}: state = "paused", want "running"
	I0816 22:42:44.880475   19613 cri.go:116] container: {ID:53b71c03b338b7fc65416496d2ec076d6ec7bd85f0afbf283b147b58501941bb Status:running}
	I0816 22:42:44.880479   19613 cri.go:118] skipping 53b71c03b338b7fc65416496d2ec076d6ec7bd85f0afbf283b147b58501941bb - not in ps
	I0816 22:42:44.880482   19613 cri.go:116] container: {ID:64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476 Status:running}
	I0816 22:42:44.880487   19613 cri.go:116] container: {ID:89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2 Status:running}
	I0816 22:42:44.880491   19613 cri.go:116] container: {ID:a2d86485038673f4321d61bfd4d731f0a0746472b180a2f808d2b959884579d2 Status:running}
	I0816 22:42:44.880495   19613 cri.go:118] skipping a2d86485038673f4321d61bfd4d731f0a0746472b180a2f808d2b959884579d2 - not in ps
	I0816 22:42:44.880498   19613 cri.go:116] container: {ID:a5a4025cb261585737124bf5b619554e77896d6c93458ee1baf7d4a451c195db Status:running}
	I0816 22:42:44.880502   19613 cri.go:116] container: {ID:a5ef942d9974feef7ad3431da2b0cfc95a8718a4ba6a9450740ecdf1febc6de3 Status:running}
	I0816 22:42:44.880506   19613 cri.go:118] skipping a5ef942d9974feef7ad3431da2b0cfc95a8718a4ba6a9450740ecdf1febc6de3 - not in ps
	I0816 22:42:44.880509   19613 cri.go:116] container: {ID:a62073928ebf1d7d8d0a7654f1808f5abc0b090020ff16db9a3e169df156b6b5 Status:running}
	I0816 22:42:44.880514   19613 cri.go:118] skipping a62073928ebf1d7d8d0a7654f1808f5abc0b090020ff16db9a3e169df156b6b5 - not in ps
	I0816 22:42:44.880523   19613 cri.go:116] container: {ID:bbeb739aeb370f1d3a9ce15770f608057fbe200efc98901efb8dab150cdc0530 Status:running}
	I0816 22:42:44.880527   19613 cri.go:118] skipping bbeb739aeb370f1d3a9ce15770f608057fbe200efc98901efb8dab150cdc0530 - not in ps
	I0816 22:42:44.880530   19613 cri.go:116] container: {ID:cb12185bdacc363d50b453691deee789994d7ad81788b6df833ae986a27ce50a Status:running}
	I0816 22:42:44.880536   19613 cri.go:116] container: {ID:cd35b65dbf056e2a246c7f0327763ece3bf3d994f02dd6b9afd6e03f3ea48b74 Status:running}
	I0816 22:42:44.880540   19613 cri.go:118] skipping cd35b65dbf056e2a246c7f0327763ece3bf3d994f02dd6b9afd6e03f3ea48b74 - not in ps
	I0816 22:42:44.880546   19613 cri.go:116] container: {ID:d3e8dbb1065a03fd63b9c5b4941d6a19dc60bcc0f70ee0d8d3ef12869d5d4a39 Status:running}
	I0816 22:42:44.880550   19613 cri.go:118] skipping d3e8dbb1065a03fd63b9c5b4941d6a19dc60bcc0f70ee0d8d3ef12869d5d4a39 - not in ps
	I0816 22:42:44.880553   19613 cri.go:116] container: {ID:e0bb5a872be5dee5a28ce3431d7728ab3b6d152dacd42c354ec65b03076e30d0 Status:running}
	I0816 22:42:44.880557   19613 cri.go:116] container: {ID:f7e4e0952db6f386f632efe7b9c51f084992c1ef01a488ff933fd58c8c0522f0 Status:running}
	I0816 22:42:44.880561   19613 cri.go:116] container: {ID:fdbc7e84435328821d7f1b6d1eee3ccce44a4cdfbaf20d5c843145707da3faac Status:running}
	I0816 22:42:44.880565   19613 cri.go:118] skipping fdbc7e84435328821d7f1b6d1eee3ccce44a4cdfbaf20d5c843145707da3faac - not in ps
	I0816 22:42:44.880606   19613 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476
	I0816 22:42:44.905901   19613 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476 89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2
	I0816 22:42:44.929249   19613 out.go:177] 
	W0816 22:42:44.929418   19613 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476 89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:42:44Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0816 22:42:44.929439   19613 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0816 22:42:44.935250   19613 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0816 22:42:44.936958   19613 out.go:177] 

** /stderr **
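The trace above shows cri.go listing 18 containers from the `runc list` JSON and then skipping entries that are either not in the requested ID set ("not in ps") or not in the "running" state. Below is a minimal Go sketch of that filtering step, assuming a simplified container record; `filterRunning`, its field set, and the sample data are hypothetical illustrations, not minikube's actual types.

package main

import (
	"encoding/json"
	"fmt"
)

// container holds the two fields of the runc list JSON that the filter
// in the log actually inspects.
type container struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// filterRunning mirrors the skip rules logged by cri.go: drop containers
// whose state is not "running" and containers not in the wanted ID set.
func filterRunning(listJSON []byte, wanted map[string]bool) ([]string, error) {
	var all []container
	if err := json.Unmarshal(listJSON, &all); err != nil {
		return nil, err
	}
	var ids []string
	for _, c := range all {
		if c.Status != "running" {
			// mirrors: skipping {<id> paused}: state = "paused", want "running"
			continue
		}
		if !wanted[c.ID] {
			// mirrors: skipping <id> - not in ps
			continue
		}
		ids = append(ids, c.ID)
	}
	return ids, nil
}

func main() {
	sample := []byte(`[{"id":"abc","status":"running"},{"id":"def","status":"paused"}]`)
	ids, err := filterRunning(sample, map[string]bool{"abc": true})
	if err != nil {
		panic(err)
	}
	fmt.Println(ids) // [abc]
}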
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p no-preload-20210816223156-6986 --alsologtostderr -v=1 failed: exit status 80
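The root cause of the exit status 80 is visible in the two ssh_runner lines above: the second invocation passes two container IDs to a single `runc pause`, but runc's pause command requires exactly one argument, so it exits 1 and minikube aborts with GUEST_PAUSE. A hedged sketch of the obvious shape of a fix, one `runc pause <container-id>` invocation per ID; `pauseAll` is a hypothetical helper, not minikube's API.

package main

import (
	"fmt"
	"os/exec"
)

// pauseAll pauses containers one at a time, because `runc pause` accepts
// exactly one container ID per invocation (passing several fails with
// "runc: \"pause\" requires exactly 1 argument(s)").
func pauseAll(root string, ids []string) error {
	for _, id := range ids {
		cmd := exec.Command("sudo", "runc", "--root", root, "pause", id)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("runc pause %s: %v: %s", id, err, out)
		}
	}
	return nil
}

func main() {
	// The two IDs that were concatenated into one invocation in the log.
	ids := []string{
		"64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476",
		"89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2",
	}
	if err := pauseAll("/run/containerd/runc/k8s.io", ids); err != nil {
		fmt.Println(err)
	}
}

Per the usage text runc itself prints, `runc list` can be used first to confirm each instance and its current status before pausing.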
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210816223156-6986 -n no-preload-20210816223156-6986
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210816223156-6986 -n no-preload-20210816223156-6986: exit status 2 (14.462936991s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	E0816 22:42:59.408361   19642 status.go:422] Error apiserver status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

** /stderr **
helpers_test.go:240: status error: exit status 2 (may be ok)
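The status probe fails because the aborted pause left etcd suspended (container 1e3071c5… shows `paused` in the listing above), so the apiserver's /healthz returns 500 with `[-]etcd failed: reason withheld`. For illustration only, a small Go program that queries the same endpoint and prints the per-check breakdown; the address and the insecure TLS transport are assumptions for a throwaway test VM, not how minikube's status check is implemented.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Skip certificate verification only because this targets a test VM
	// with a minikube-generated CA; never do this against real clusters.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.116.66:8443/healthz?verbose")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A 500 here with "[-]etcd failed" matches an apiserver whose etcd
	// container has been paused out from under it.
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}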
helpers_test.go:245: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20210816223156-6986 logs -n 25
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p no-preload-20210816223156-6986 logs -n 25: exit status 110 (12.73084949s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p flannel-20210816222225-6986                    | flannel-20210816222225-6986                    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:31:06 UTC | Mon, 16 Aug 2021 22:33:13 UTC |
	|         | --memory=2048                                     |                                                |         |         |                               |                               |
	|         | --alsologtostderr                                 |                                                |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                                |         |         |                               |                               |
	|         | --cni=flannel --driver=kvm2                       |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	| ssh     | -p flannel-20210816222225-6986                    | flannel-20210816222225-6986                    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:33:18 UTC | Mon, 16 Aug 2021 22:33:18 UTC |
	|         | pgrep -a kubelet                                  |                                                |         |         |                               |                               |
	| start   | -p bridge-20210816222225-6986                     | bridge-20210816222225-6986                     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:31:52 UTC | Mon, 16 Aug 2021 22:33:19 UTC |
	|         | --memory=2048                                     |                                                |         |         |                               |                               |
	|         | --alsologtostderr                                 |                                                |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                                |         |         |                               |                               |
	|         | --cni=bridge --driver=kvm2                        |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	| ssh     | -p bridge-20210816222225-6986                     | bridge-20210816222225-6986                     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:33:20 UTC | Mon, 16 Aug 2021 22:33:20 UTC |
	|         | pgrep -a kubelet                                  |                                                |         |         |                               |                               |
	| delete  | -p flannel-20210816222225-6986                    | flannel-20210816222225-6986                    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:33:31 UTC | Mon, 16 Aug 2021 22:33:33 UTC |
	| delete  | -p bridge-20210816222225-6986                     | bridge-20210816222225-6986                     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:17 UTC | Mon, 16 Aug 2021 22:34:18 UTC |
	| delete  | -p                                                | disable-driver-mounts-20210816223418-6986      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:18 UTC | Mon, 16 Aug 2021 22:34:18 UTC |
	|         | disable-driver-mounts-20210816223418-6986         |                                                |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:31:54 UTC | Mon, 16 Aug 2021 22:34:34 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:44 UTC | Mon, 16 Aug 2021 22:34:45 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816223156-6986                 | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:31:56 UTC | Mon, 16 Aug 2021 22:35:04 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:33:33 UTC | Mon, 16 Aug 2021 22:35:08 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:16 UTC | Mon, 16 Aug 2021 22:35:17 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:19 UTC | Mon, 16 Aug 2021 22:35:20 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:18 UTC | Mon, 16 Aug 2021 22:35:42 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:51 UTC | Mon, 16 Aug 2021 22:35:52 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:45 UTC | Mon, 16 Aug 2021 22:36:18 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:18 UTC | Mon, 16 Aug 2021 22:36:18 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:17 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:20 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:52 UTC | Mon, 16 Aug 2021 22:37:25 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:37:25 UTC | Mon, 16 Aug 2021 22:37:25 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816223156-6986                 | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:42:31 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:42:42 UTC | Mon, 16 Aug 2021 22:42:42 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:37:25
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:37:25.306577   19204 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:37:25.306653   19204 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:37:25.306656   19204 out.go:311] Setting ErrFile to fd 2...
	I0816 22:37:25.306663   19204 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:37:25.307072   19204 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:37:25.307547   19204 out.go:305] Setting JSON to false
	I0816 22:37:25.351342   19204 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":4807,"bootTime":1629148638,"procs":188,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:37:25.351461   19204 start.go:121] virtualization: kvm guest
	I0816 22:37:25.353955   19204 out.go:177] * [default-k8s-different-port-20210816223418-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:37:25.355393   19204 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:37:25.354127   19204 notify.go:169] Checking for updates...
	I0816 22:37:25.356781   19204 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:37:25.358158   19204 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:37:25.364678   19204 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:37:25.365267   19204 config.go:177] Loaded profile config "default-k8s-different-port-20210816223418-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:37:25.365899   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:37:25.365956   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:37:25.381650   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46493
	I0816 22:37:25.382065   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:37:25.382798   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:37:25.382820   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:37:25.383330   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:37:25.383519   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:25.383721   19204 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:37:25.384192   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:37:25.384260   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:37:25.401082   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44899
	I0816 22:37:25.402507   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:37:25.403115   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:37:25.403179   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:37:25.403663   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:37:25.403903   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:25.439751   19204 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 22:37:25.439781   19204 start.go:278] selected driver: kvm2
	I0816 22:37:25.439788   19204 start.go:751] validating driver "kvm2" against &{Name:default-k8s-different-port-20210816223418-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernete
sConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210816223418-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kube
let:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:37:25.439905   19204 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 22:37:25.441282   19204 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:37:25.441453   19204 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 22:37:25.455762   19204 install.go:137] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2 version is 1.22.0
	I0816 22:37:25.456183   19204 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 22:37:25.456219   19204 cni.go:93] Creating CNI manager for ""
	I0816 22:37:25.456234   19204 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:37:25.456245   19204 start_flags.go:277] config:
	{Name:default-k8s-different-port-20210816223418-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-2021
0816223418-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop
:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:37:25.456384   19204 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:37:25.458420   19204 out.go:177] * Starting control plane node default-k8s-different-port-20210816223418-6986 in cluster default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.458447   19204 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:37:25.458480   19204 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0816 22:37:25.458495   19204 cache.go:56] Caching tarball of preloaded images
	I0816 22:37:25.458602   19204 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 22:37:25.458622   19204 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0816 22:37:25.458779   19204 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/config.json ...
	I0816 22:37:25.459003   19204 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:37:25.459033   19204 start.go:313] acquiring machines lock for default-k8s-different-port-20210816223418-6986: {Name:mk808edd60d1305a42bb85791729eff4573dbb15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 22:37:25.459101   19204 start.go:317] acquired machines lock for "default-k8s-different-port-20210816223418-6986" in 48.071µs
	I0816 22:37:25.459123   19204 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:37:25.459131   19204 fix.go:55] fixHost starting: 
	I0816 22:37:25.459569   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:37:25.459614   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:37:25.473634   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44349
	I0816 22:37:25.474153   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:37:25.474765   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:37:25.474786   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:37:25.475205   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:37:25.475409   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:25.475621   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetState
	I0816 22:37:25.479447   19204 fix.go:108] recreateIfNeeded on default-k8s-different-port-20210816223418-6986: state=Stopped err=<nil>
	I0816 22:37:25.479498   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	W0816 22:37:25.479660   19204 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:37:21.322104   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:21.822129   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:22.321669   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:22.821492   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:23.322452   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:23.822419   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:24.322141   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:24.821615   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:24.856062   18923 api_server.go:70] duration metric: took 8.045517198s to wait for apiserver process to appear ...
	I0816 22:37:24.856091   18923 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:37:24.856103   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:24.856734   18923 api_server.go:255] stopped: https://192.168.116.66:8443/healthz: Get "https://192.168.116.66:8443/healthz": dial tcp 192.168.116.66:8443: connect: connection refused
	I0816 22:37:25.357442   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:22.382628   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Getting to WaitForSSH function...
	I0816 22:37:22.388062   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:22.388472   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:22.388501   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:22.388736   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Using SSH client type: external
	I0816 22:37:22.388774   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa (-rw-------)
	I0816 22:37:22.388825   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.105.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:37:22.388851   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | About to run SSH command:
	I0816 22:37:22.388868   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | exit 0
	I0816 22:37:23.527862   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | SSH cmd err, output: <nil>: 
	I0816 22:37:23.528297   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetConfigRaw
	I0816 22:37:23.529175   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetIP
	I0816 22:37:23.535445   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.535831   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.535862   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.536325   18929 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/config.json ...
	I0816 22:37:23.536603   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:23.536838   18929 machine.go:88] provisioning docker machine ...
	I0816 22:37:23.536860   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:23.537120   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetMachineName
	I0816 22:37:23.537298   18929 buildroot.go:166] provisioning hostname "embed-certs-20210816223333-6986"
	I0816 22:37:23.537328   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetMachineName
	I0816 22:37:23.537497   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:23.543084   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.543520   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.543560   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.543770   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:23.543953   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:23.544122   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:23.544284   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:23.544470   18929 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:23.544676   18929 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.105.129 22 <nil> <nil>}
	I0816 22:37:23.544698   18929 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20210816223333-6986 && echo "embed-certs-20210816223333-6986" | sudo tee /etc/hostname
	I0816 22:37:23.682935   18929 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20210816223333-6986
	
	I0816 22:37:23.682982   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:23.689555   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.690034   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.690071   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.690297   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:23.690526   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:23.690738   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:23.690910   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:23.691116   18929 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:23.691321   18929 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.105.129 22 <nil> <nil>}
	I0816 22:37:23.691351   18929 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20210816223333-6986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210816223333-6986/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20210816223333-6986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:37:23.826330   18929 main.go:130] libmachine: SSH cmd err, output: <nil>: 
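	The hostname step above is plain shell: set the hostname, then keep /etc/hosts consistent with it. A sketch of building that snippet programmatically; hostsScript is a hypothetical helper, not minikube's implementation:

	package main

	import "fmt"

	// hostsScript mirrors the shell snippet above: rewrite an existing
	// 127.0.1.1 entry if present, otherwise append one.
	func hostsScript(hostname string) string {
		return fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
	  else
	    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
	  fi
	fi`, hostname)
	}

	func main() {
		fmt.Println(hostsScript("embed-certs-20210816223333-6986"))
	}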
	I0816 22:37:23.826357   18929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:37:23.826393   18929 buildroot.go:174] setting up certificates
	I0816 22:37:23.826403   18929 provision.go:83] configureAuth start
	I0816 22:37:23.826415   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetMachineName
	I0816 22:37:23.826673   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetIP
	I0816 22:37:23.832833   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.833221   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.833252   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.833505   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:23.839058   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.839437   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.839468   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.839721   18929 provision.go:138] copyHostCerts
	I0816 22:37:23.839785   18929 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:37:23.839801   18929 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:37:23.839858   18929 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:37:23.840010   18929 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:37:23.840023   18929 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:37:23.840050   18929 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:37:23.840148   18929 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:37:23.840160   18929 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:37:23.840181   18929 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
	I0816 22:37:23.840251   18929 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20210816223333-6986 san=[192.168.105.129 192.168.105.129 localhost 127.0.0.1 minikube embed-certs-20210816223333-6986]
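	The provision step above generates a server certificate signed by the profile CA, with the SAN list shown in the log. A self-contained sketch of issuing such a cert with Go's crypto/x509 (the throwaway CA and the SAN values are illustrative assumptions; errors are elided for brevity):

	// servercert.go - issue a server certificate signed by a CA, with IP and
	// DNS SANs like the provision step above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// In the real flow the CA cert/key come from .minikube/certs; here a
		// throwaway CA keeps the sketch self-contained.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{Organization: []string{"jenkins.embed-certs"}},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "embed-certs"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs mirroring the log line: IPs plus hostnames.
			IPAddresses: []net.IP{net.ParseIP("192.168.105.129"), net.ParseIP("127.0.0.1")},
			DNSNames:    []string{"localhost", "minikube", "embed-certs-20210816223333-6986"},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}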
	I0816 22:37:24.071276   18929 provision.go:172] copyRemoteCerts
	I0816 22:37:24.071347   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:37:24.071383   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.077584   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.078065   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.078133   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.078307   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.078500   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.078636   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.078743   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:37:24.168996   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:37:24.190581   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0816 22:37:24.211894   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:37:24.234970   18929 provision.go:86] duration metric: configureAuth took 408.533613ms
	I0816 22:37:24.235001   18929 buildroot.go:189] setting minikube options for container-runtime
	I0816 22:37:24.235282   18929 config.go:177] Loaded profile config "embed-certs-20210816223333-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:37:24.235303   18929 machine.go:91] provisioned docker machine in 698.450664ms
	I0816 22:37:24.235313   18929 start.go:267] post-start starting for "embed-certs-20210816223333-6986" (driver="kvm2")
	I0816 22:37:24.235321   18929 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:37:24.235352   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.235711   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:37:24.235748   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.242219   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.242647   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.242677   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.242968   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.243197   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.243376   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.243542   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:37:24.342244   18929 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:37:24.348430   18929 info.go:137] Remote host: Buildroot 2020.02.12
	I0816 22:37:24.348458   18929 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:37:24.348527   18929 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:37:24.348678   18929 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0816 22:37:24.348794   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:37:24.358370   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:37:24.378832   18929 start.go:270] post-start completed in 143.493882ms
	I0816 22:37:24.378891   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.379183   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.385172   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.385565   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.385596   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.385720   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.385936   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.386069   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.386238   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.386404   18929 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:24.386604   18929 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.105.129 22 <nil> <nil>}
	I0816 22:37:24.386621   18929 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0816 22:37:24.513150   18929 main.go:130] libmachine: SSH cmd err, output: <nil>: 1629153444.435910196
	
	I0816 22:37:24.513175   18929 fix.go:212] guest clock: 1629153444.435910196
	I0816 22:37:24.513185   18929 fix.go:225] Guest: 2021-08-16 22:37:24.435910196 +0000 UTC Remote: 2021-08-16 22:37:24.379164096 +0000 UTC m=+28.470229855 (delta=56.7461ms)
	I0816 22:37:24.513209   18929 fix.go:196] guest clock delta is within tolerance: 56.7461ms
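	The fix step above compares the guest's `date +%s.%N` output against the host clock and accepts small drift. A sketch of the parse-and-compare shape; the one-second tolerance is an assumption (against a live clock the 2021 timestamp below will of course exceed it — the point is the shape, not the value):

	// clockdelta.go - parse a guest's `date +%s.%N` output and check drift.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func guestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		nsec := int64(0)
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, _ := guestClock("1629153444.435910196") // value from the log above
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta %v (within 1s tolerance: %v)\n", delta, delta < time.Second)
	}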
	I0816 22:37:24.513220   18929 fix.go:57] fixHost completed within 14.813246061s
	I0816 22:37:24.513226   18929 start.go:80] releasing machines lock for "embed-certs-20210816223333-6986", held for 14.813280431s
	I0816 22:37:24.513267   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.513532   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetIP
	I0816 22:37:24.519703   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.520118   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.520149   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.520319   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.520528   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.521062   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.521300   18929 ssh_runner.go:149] Run: systemctl --version
	I0816 22:37:24.521326   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.521364   18929 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:37:24.521406   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.527844   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.527923   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.528257   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.528281   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.528308   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.528323   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.528556   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.528678   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.528724   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.528933   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.528943   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.529108   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.529179   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:37:24.529267   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:37:24.634682   18929 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:37:24.634891   18929 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:37:24.131199   18635 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:37:24.131267   18635 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:37:24.140028   18635 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:37:24.157600   18635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:37:24.171359   18635 system_pods.go:59] 8 kube-system pods found
	I0816 22:37:24.171398   18635 system_pods.go:61] "coredns-fb8b8dccf-qwcrg" [fd98f945-fee1-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171407   18635 system_pods.go:61] "etcd-old-k8s-version-20210816223154-6986" [1d77612e-fee2-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171414   18635 system_pods.go:61] "kube-apiserver-old-k8s-version-20210816223154-6986" [152107a2-fee2-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171420   18635 system_pods.go:61] "kube-controller-manager-old-k8s-version-20210816223154-6986" [8620a0da-fee2-11eb-b5b6-525400bf2371] Pending
	I0816 22:37:24.171426   18635 system_pods.go:61] "kube-proxy-nvb2s" [fdaa2b42-fee1-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171438   18635 system_pods.go:61] "kube-scheduler-old-k8s-version-20210816223154-6986" [1b1505e6-fee2-11eb-bb5b-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:37:24.171454   18635 system_pods.go:61] "metrics-server-8546d8b77b-gl6jr" [28801d4e-fee2-11eb-bb5b-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:37:24.171462   18635 system_pods.go:61] "storage-provisioner" [ff1e11f1-fee1-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171469   18635 system_pods.go:74] duration metric: took 13.840978ms to wait for pod list to return data ...
	I0816 22:37:24.171481   18635 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:37:24.176303   18635 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:37:24.176347   18635 node_conditions.go:123] node cpu capacity is 2
	I0816 22:37:24.176360   18635 node_conditions.go:105] duration metric: took 4.872863ms to run NodePressure ...
	I0816 22:37:24.176376   18635 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:25.292041   18635 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.115642082s)
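	The addon phase above is run through bash with PATH pointed at the version-pinned kubeadm binary. A sketch of invoking it the same way from Go; the paths mirror the log line and assume they exist on the target:

	// kubeadmaddon.go - run `kubeadm init phase addon all` with a pinned PATH.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		script := `sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH ` +
			`kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml`
		out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
		fmt.Printf("err=%v\n%s", err, out)
	}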
	I0816 22:37:25.292077   18635 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:37:25.325547   18635 kubeadm.go:746] kubelet initialised
	I0816 22:37:25.325574   18635 kubeadm.go:747] duration metric: took 33.485813ms waiting for restarted kubelet to initialise ...
	I0816 22:37:25.325590   18635 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:37:25.351142   18635 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace to be "Ready" ...
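	The pod_ready wait above boils down to polling a pod until its Ready condition reports True. A sketch using client-go; the kubeconfig path is an assumption, and the pod name is taken from the log for illustration:

	// podready.go - wait for a named pod to report the Ready condition.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitPodReady(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // not found yet: keep polling
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		c := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(c, "kube-system", "coredns-fb8b8dccf-qwcrg", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}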
	I0816 22:37:27.387702   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:25.482074   19204 out.go:177] * Restarting existing kvm2 VM for "default-k8s-different-port-20210816223418-6986" ...
	I0816 22:37:25.482104   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Start
	I0816 22:37:25.482316   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Ensuring networks are active...
	I0816 22:37:25.484598   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Ensuring network default is active
	I0816 22:37:25.485014   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Ensuring network mk-default-k8s-different-port-20210816223418-6986 is active
	I0816 22:37:25.485452   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Getting domain xml...
	I0816 22:37:25.487765   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Creating domain...
	I0816 22:37:25.923048   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Waiting to get IP...
	I0816 22:37:25.924065   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.924660   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Found IP for machine: 192.168.50.186
	I0816 22:37:25.924682   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Reserving static IP address...
	I0816 22:37:25.924701   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has current primary IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.925155   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "default-k8s-different-port-20210816223418-6986", mac: "52:54:00:ed:d4:05", ip: "192.168.50.186"} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:34:34 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:25.925187   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | skip adding static IP to network mk-default-k8s-different-port-20210816223418-6986 - found existing host DHCP lease matching {name: "default-k8s-different-port-20210816223418-6986", mac: "52:54:00:ed:d4:05", ip: "192.168.50.186"}
	I0816 22:37:25.925202   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Reserved static IP address: 192.168.50.186
	I0816 22:37:25.925219   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Waiting for SSH to be available...
	I0816 22:37:25.925234   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Getting to WaitForSSH function...
	I0816 22:37:25.930369   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.930660   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:34:34 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:25.930705   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.930802   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Using SSH client type: external
	I0816 22:37:25.930842   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa (-rw-------)
	I0816 22:37:25.930888   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:37:25.931010   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | About to run SSH command:
	I0816 22:37:25.931033   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | exit 0
	I0816 22:37:30.356304   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:37:30.356337   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:37:30.357361   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:30.544479   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:30.544514   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:30.857809   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:30.866881   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:30.866920   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:28.652395   18929 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.017437883s)
	I0816 22:37:28.652577   18929 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0816 22:37:28.652647   18929 ssh_runner.go:149] Run: which lz4
	I0816 22:37:28.657345   18929 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 22:37:28.662555   18929 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0816 22:37:28.662584   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (928970367 bytes)
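	The existence check above runs `stat` on the guest and treats a non-zero exit as "missing", which is what triggers the ~900 MB tarball copy. A sketch of that decision; host and key path are illustrative assumptions:

	// preload.go - decide whether the preload tarball already exists on the
	// guest before copying it over SSH.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func preloadExists(host, keyPath string) bool {
		cmd := exec.Command("ssh", "-i", keyPath, "docker@"+host,
			`stat -c "%s %y" /preloaded.tar.lz4`)
		// stat exits non-zero when the file is missing, so Run's error is
		// the existence signal here.
		return cmd.Run() == nil
	}

	func main() {
		if !preloadExists("192.168.105.129", "/path/to/id_rsa") {
			fmt.Println("preload missing: copy the tarball, then `sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4`")
		}
	}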
	I0816 22:37:31.357641   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:31.385946   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:31.385974   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:31.857651   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:31.878038   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:31.878070   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:32.357730   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:32.371926   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:32.371954   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:32.857204   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:32.867865   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 200:
	ok
	I0816 22:37:32.881085   18923 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:37:32.881113   18923 api_server.go:129] duration metric: took 8.025015474s to wait for apiserver health ...
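	The healthz loop that just completed tolerates 403s (anonymous access before RBAC bootstrap finishes) and 500s (post-start hooks still failing) until it sees a bare 200 "ok". A sketch of such a poller; skipping TLS verification here stands in for trusting the cluster CA:

	// healthz.go - poll the apiserver /healthz endpoint until it returns 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // body is "ok"
				}
				// 403: RBAC bootstrap not done; 500: post-start hooks failing.
				fmt.Printf("healthz %d: %.40s...\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitHealthz("https://192.168.116.66:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}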
	I0816 22:37:32.881124   18923 cni.go:93] Creating CNI manager for ""
	I0816 22:37:32.881132   18923 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:37:29.389763   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:31.391442   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:35.155848   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | SSH cmd err, output: exit status 255: 
	I0816 22:37:35.155882   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0816 22:37:35.155896   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | command : exit 0
	I0816 22:37:35.155905   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | err     : exit status 255
	I0816 22:37:35.155918   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | output  : 
	I0816 22:37:32.883184   18923 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:37:32.883268   18923 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:37:32.927942   18923 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
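	"scp memory" above means writing in-memory bytes to a root-owned remote path. One way to get that effect is piping into `sudo tee` over ssh; the conflist payload below is illustrative, not minikube's exact 457-byte file, and the host/key are assumptions:

	// scpmemory.go - write an in-memory config to a root-owned remote path.
	package main

	import (
		"os/exec"
		"strings"
	)

	func writeRemote(host, keyPath, remotePath, content string) error {
		// Assumes passwordless sudo on the guest, as the minikube VM has.
		cmd := exec.Command("ssh", "-i", keyPath, "docker@"+host,
			"sudo tee "+remotePath+" >/dev/null")
		cmd.Stdin = strings.NewReader(content)
		return cmd.Run()
	}

	func main() {
		conflist := `{"cniVersion":"0.3.1","name":"bridge","plugins":[{"type":"bridge","isDefaultGateway":true,"ipMasq":true,"ipam":{"type":"host-local"}}]}` // illustrative payload
		_ = writeRemote("192.168.116.66", "/path/to/id_rsa", "/etc/cni/net.d/1-k8s.conflist", conflist)
	}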
	I0816 22:37:33.011939   18923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:37:33.043009   18923 system_pods.go:59] 8 kube-system pods found
	I0816 22:37:33.043056   18923 system_pods.go:61] "coredns-78fcd69978-nzf79" [a95afe1c-4f93-44a8-b669-b42c72f3500d] Running
	I0816 22:37:33.043064   18923 system_pods.go:61] "etcd-no-preload-20210816223156-6986" [fc40f0e0-16ef-4ba8-b5fd-17f4684d3a13] Running
	I0816 22:37:33.043076   18923 system_pods.go:61] "kube-apiserver-no-preload-20210816223156-6986" [f13df2c8-5aa8-49c3-89c0-b584ff8c62c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 22:37:33.043083   18923 system_pods.go:61] "kube-controller-manager-no-preload-20210816223156-6986" [8b866a1c-d283-4410-acbf-be2dbaa0f025] Running
	I0816 22:37:33.043094   18923 system_pods.go:61] "kube-proxy-64m6s" [fc5086fe-a671-4078-b76c-0c8f0656dca6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:37:33.043108   18923 system_pods.go:61] "kube-scheduler-no-preload-20210816223156-6986" [5db4c302-251a-47dc-90b9-424206ed445d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:37:33.043123   18923 system_pods.go:61] "metrics-server-7c784ccb57-44llk" [319102e5-661e-43bc-9c07-07463f6b1e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:37:33.043129   18923 system_pods.go:61] "storage-provisioner" [3da85640-a722-4ba1-a886-926bcaf81b8e] Running
	I0816 22:37:33.043140   18923 system_pods.go:74] duration metric: took 31.176037ms to wait for pod list to return data ...
	I0816 22:37:33.043149   18923 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:37:33.049500   18923 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:37:33.049531   18923 node_conditions.go:123] node cpu capacity is 2
	I0816 22:37:33.049544   18923 node_conditions.go:105] duration metric: took 6.385759ms to run NodePressure ...
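	The NodePressure step above reads node capacity (ephemeral storage, CPU) from the API. A sketch with client-go; the kubeconfig path is an assumption:

	// nodecapacity.go - print node capacity the way the check above logs it.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		c := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := c.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			// e.g. "node storage ephemeral capacity is 17784752Ki", "node cpu capacity is 2"
			fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		}
	}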
	I0816 22:37:33.049562   18923 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:33.993434   18923 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:37:34.012191   18923 kubeadm.go:746] kubelet initialised
	I0816 22:37:34.012215   18923 kubeadm.go:747] duration metric: took 18.75429ms waiting for restarted kubelet to initialise ...
	I0816 22:37:34.012224   18923 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:37:34.033224   18923 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-nzf79" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:34.059145   18923 pod_ready.go:92] pod "coredns-78fcd69978-nzf79" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:34.059169   18923 pod_ready.go:81] duration metric: took 25.912051ms waiting for pod "coredns-78fcd69978-nzf79" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:34.059183   18923 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:32.660993   18929 containerd.go:546] Took 4.003687 seconds to copy over tarball
	I0816 22:37:32.661054   18929 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 22:37:33.892216   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:36.388385   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:38.156062   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Getting to WaitForSSH function...
	I0816 22:37:38.161988   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:38.162321   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:34:34 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:38.162379   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:38.162468   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Using SSH client type: external
	I0816 22:37:38.162499   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa (-rw-------)
	I0816 22:37:38.162538   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:37:38.162552   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | About to run SSH command:
	I0816 22:37:38.162570   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | exit 0
	I0816 22:37:36.102180   18923 pod_ready.go:102] pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:38.889153   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:41.402823   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | SSH cmd err, output: <nil>: 
	I0816 22:37:41.403283   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetConfigRaw
	I0816 22:37:41.404010   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetIP
	I0816 22:37:41.410017   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.410394   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.410432   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.410693   19204 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/config.json ...
	I0816 22:37:41.410926   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:41.411142   19204 machine.go:88] provisioning docker machine ...
	I0816 22:37:41.411167   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:41.411335   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetMachineName
	I0816 22:37:41.411477   19204 buildroot.go:166] provisioning hostname "default-k8s-different-port-20210816223418-6986"
	I0816 22:37:41.411499   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetMachineName
	I0816 22:37:41.411620   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:41.416760   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.417121   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.417154   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.417291   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:41.417487   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:41.417620   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:41.417769   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:41.417933   19204 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:41.418151   19204 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 22:37:41.418167   19204 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20210816223418-6986 && echo "default-k8s-different-port-20210816223418-6986" | sudo tee /etc/hostname
	I0816 22:37:41.560416   19204 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20210816223418-6986
	
	I0816 22:37:41.560449   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:41.566690   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.567028   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.567064   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.567351   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:41.567542   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:41.567703   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:41.567827   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:41.567996   19204 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:41.568193   19204 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 22:37:41.568221   19204 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20210816223418-6986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20210816223418-6986/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20210816223418-6986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:37:41.743484   19204 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:37:41.743518   19204 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.
pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:37:41.743559   19204 buildroot.go:174] setting up certificates
	I0816 22:37:41.743576   19204 provision.go:83] configureAuth start
	I0816 22:37:41.743593   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetMachineName
	I0816 22:37:41.743895   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetIP
	I0816 22:37:41.750014   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.750423   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.750467   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.750809   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:41.756158   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.756536   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.756569   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.756717   19204 provision.go:138] copyHostCerts
	I0816 22:37:41.756789   19204 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:37:41.756799   19204 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:37:41.756862   19204 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:37:41.756962   19204 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:37:41.756972   19204 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:37:41.756994   19204 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:37:41.757071   19204 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:37:41.757082   19204 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:37:41.757102   19204 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
	I0816 22:37:41.757156   19204 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20210816223418-6986 san=[192.168.50.186 192.168.50.186 localhost 127.0.0.1 minikube default-k8s-different-port-20210816223418-6986]
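
The provision.go line above generates a CA-signed server certificate whose SAN list covers the VM IP, localhost, and the machine names. A self-contained sketch of producing such a certificate with crypto/x509 (a throwaway in-memory CA stands in for minikube's ca.pem/ca-key.pem, and the organization name is arbitrary):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Throwaway CA (assumption: minikube loads its real CA from disk instead).
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	ca := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	srv := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"example-org"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(1, 0, 0),
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs, mirroring the san=[...] list in the log line above.
    		DNSNames:    []string{"localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("192.168.50.186"), net.ParseIP("127.0.0.1")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
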
	I0816 22:37:42.356131   19204 provision.go:172] copyRemoteCerts
	I0816 22:37:42.356205   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:37:42.356250   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.362214   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.362513   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.362547   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.362780   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.362992   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.363219   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.363363   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:37:42.482862   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:37:42.512838   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0816 22:37:42.540047   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:37:42.568047   19204 provision.go:86] duration metric: configureAuth took 824.454088ms
	I0816 22:37:42.568077   19204 buildroot.go:189] setting minikube options for container-runtime
	I0816 22:37:42.568300   19204 config.go:177] Loaded profile config "default-k8s-different-port-20210816223418-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:37:42.568315   19204 machine.go:91] provisioned docker machine in 1.157156536s
	I0816 22:37:42.568324   19204 start.go:267] post-start starting for "default-k8s-different-port-20210816223418-6986" (driver="kvm2")
	I0816 22:37:42.568333   19204 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:37:42.568368   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.568715   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:37:42.568749   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.574488   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.574891   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.574928   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.575140   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.575339   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.575523   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.575710   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:37:42.676578   19204 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:37:42.682148   19204 info.go:137] Remote host: Buildroot 2020.02.12
	I0816 22:37:42.682181   19204 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:37:42.682247   19204 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:37:42.682409   19204 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0816 22:37:42.682558   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:37:42.691519   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:37:42.711453   19204 start.go:270] post-start completed in 143.110809ms
	I0816 22:37:42.711496   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.711732   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.718125   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.718511   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.718547   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.718832   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.719063   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.719246   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.719404   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.719588   19204 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:42.719762   19204 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 22:37:42.719775   19204 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0816 22:37:42.864591   19204 main.go:130] libmachine: SSH cmd err, output: <nil>: 1629153462.785763979
	
	I0816 22:37:42.864617   19204 fix.go:212] guest clock: 1629153462.785763979
	I0816 22:37:42.864627   19204 fix.go:225] Guest: 2021-08-16 22:37:42.785763979 +0000 UTC Remote: 2021-08-16 22:37:42.711713193 +0000 UTC m=+17.455762277 (delta=74.050786ms)
	I0816 22:37:42.864651   19204 fix.go:196] guest clock delta is within tolerance: 74.050786ms
	I0816 22:37:42.864660   19204 fix.go:57] fixHost completed within 17.405528602s
	I0816 22:37:42.864666   19204 start.go:80] releasing machines lock for "default-k8s-different-port-20210816223418-6986", held for 17.405551891s
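
The fix.go lines above compare the guest's `date +%s.%N` output against the host-side reference time and accept the machine when the delta is inside a tolerance. A sketch of that comparison (the one-second tolerance is an assumption; the timestamps are the ones logged, and the 74ms delta matches the log):

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    func main() {
    	const tolerance = time.Second
    	guestOut := "1629153462.785763979"         // guest `date +%s.%N`, from the log
    	remote := time.Unix(1629153462, 711713193) // host-side reference time, from the log

    	f, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(f*float64(time.Second)))
    	delta := time.Duration(math.Abs(float64(guest.Sub(remote))))
    	if delta > tolerance {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    		return
    	}
    	fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
    }
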
	I0816 22:37:42.864711   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.864961   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetIP
	I0816 22:37:42.871077   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.871460   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.871504   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.871781   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.871990   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.872747   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.873035   19204 ssh_runner.go:149] Run: systemctl --version
	I0816 22:37:42.873067   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.873387   19204 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:37:42.873431   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.881178   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.881737   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.882041   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.882067   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.882095   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.882114   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.882432   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.882476   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.882624   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.882654   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.882754   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.882821   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.882852   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:37:42.882932   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:37:42.983824   19204 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:37:42.983945   19204 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:37:41.792417   18923 pod_ready.go:102] pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:42.110388   18923 pod_ready.go:92] pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:42.110425   18923 pod_ready.go:81] duration metric: took 8.051231395s waiting for pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.110443   18923 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.128769   18923 pod_ready.go:92] pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:42.128789   18923 pod_ready.go:81] duration metric: took 18.337432ms waiting for pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.128804   18923 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.137520   18923 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:42.137541   18923 pod_ready.go:81] duration metric: took 8.728281ms waiting for pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.137554   18923 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-64m6s" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.158798   18923 pod_ready.go:92] pod "kube-proxy-64m6s" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:42.158877   18923 pod_ready.go:81] duration metric: took 21.313805ms waiting for pod "kube-proxy-64m6s" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.158908   18923 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:44.512973   18923 pod_ready.go:102] pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:45.697026   18923 pod_ready.go:92] pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:45.697054   18923 pod_ready.go:81] duration metric: took 3.538123235s waiting for pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:45.697067   18923 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:44.369712   18929 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (11.708626678s)
	I0816 22:37:44.369752   18929 containerd.go:553] Took 11.708733 seconds to extract the tarball
	I0816 22:37:44.369766   18929 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0816 22:37:44.433232   18929 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:37:44.586357   18929 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:37:44.635654   18929 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0816 22:37:44.682553   18929 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0816 22:37:44.697822   18929 docker.go:153] disabling docker service ...
	I0816 22:37:44.697882   18929 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:37:44.709238   18929 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:37:44.720469   18929 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:37:44.857666   18929 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:37:44.991672   18929 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:37:45.005773   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:37:45.020903   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kI
gogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0816 22:37:45.035818   18929 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:37:45.045388   18929 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:37:45.045444   18929 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:37:45.065836   18929 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
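
The sequence above is a fallback chain: `sysctl net.bridge.bridge-nf-call-iptables` fails with status 255 until the br_netfilter module is loaded, so the module is probed and IPv4 forwarding enabled explicitly. A sketch of the same chain with os/exec (the commands are the ones in the log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Try the sysctl first; it fails until br_netfilter is loaded.
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		// /proc/sys/net/bridge appears only once the module is in.
    		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
    			fmt.Fprintln(os.Stderr, "modprobe br_netfilter:", err)
    		}
    	}
    	// Enable IPv4 forwarding the same way the log does.
    	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
    		fmt.Fprintln(os.Stderr, "enable ip_forward:", err)
    	}
    }
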
	I0816 22:37:45.073649   18929 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:37:45.210250   18929 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:37:45.536389   18929 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0816 22:37:45.536468   18929 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:37:45.543940   18929 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0816 22:37:46.648822   18929 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
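
The retry above is a bounded poll: stat the containerd socket, back off about a second, and give up once the 60s budget runs out. A stdlib sketch of that loop (paths and timing from the log; a simplified stand-in for ssh_runner plus retry.go):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	const sock = "/run/containerd/containerd.sock"
    	deadline := time.Now().Add(60 * time.Second)
    	for {
    		if _, err := os.Stat(sock); err == nil {
    			fmt.Println(sock, "is ready")
    			return
    		}
    		if time.Now().After(deadline) {
    			fmt.Fprintf(os.Stderr, "timed out waiting for %s\n", sock)
    			os.Exit(1)
    		}
    		time.Sleep(time.Second) // the log shows ~1s backoff between attempts
    	}
    }
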
	I0816 22:37:46.654589   18929 start.go:413] Will wait 60s for crictl version
	I0816 22:37:46.654654   18929 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:37:46.687975   18929 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0816 22:37:46.688041   18929 ssh_runner.go:149] Run: containerd --version
	I0816 22:37:46.717960   18929 ssh_runner.go:149] Run: containerd --version
	I0816 22:37:43.671220   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:45.887022   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:47.896514   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:46.994449   19204 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.010481954s)
	I0816 22:37:46.994588   19204 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0816 22:37:46.994677   19204 ssh_runner.go:149] Run: which lz4
	I0816 22:37:46.999431   19204 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 22:37:47.004309   19204 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0816 22:37:47.004338   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (928970367 bytes)
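
The stat existence check above gates the ~929 MB preload transfer: the tarball is pushed only when the target file is missing. A local-I/O sketch of the same gate (a plain file copy stands in for the scp transport; the source path is shortened from the log):

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    )

    func main() {
    	const src = "preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4"
    	const dst = "/preloaded.tar.lz4"
    	// Skip the expensive copy when the target already exists.
    	if _, err := os.Stat(dst); err == nil {
    		fmt.Println(dst, "already present, skipping copy")
    		return
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		panic(err)
    	}
    	defer in.Close()
    	out, err := os.Create(dst)
    	if err != nil {
    		panic(err)
    	}
    	defer out.Close()
    	n, err := io.Copy(out, in)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("copied %d bytes\n", n)
    }
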
	I0816 22:37:47.723452   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:49.727582   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:46.750218   18929 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0816 22:37:46.750266   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetIP
	I0816 22:37:46.755631   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:46.756018   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:46.756051   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:46.756195   18929 ssh_runner.go:149] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0816 22:37:46.760434   18929 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
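
The bash one-liner above rewrites /etc/hosts idempotently: strip any stale host.minikube.internal mapping, append the fresh one, and copy the staged file into place. The same logic sketched in Go (the final privileged copy is left to sudo cp, as in the log):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.105.1\thost.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	// Drop any existing mapping, mirroring grep -v $'\thost.minikube.internal$'.
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\thost.minikube.internal") {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, entry)
    	out := strings.Join(kept, "\n") + "\n"
    	// Stage to a temp file; installing over /etc/hosts needs privileges.
    	if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0644); err != nil {
    		panic(err)
    	}
    	fmt.Println("staged /tmp/hosts.new; copy over /etc/hosts with sudo cp")
    }
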
	I0816 22:37:46.770865   18929 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:37:46.770913   18929 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:37:46.804122   18929 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:37:46.804147   18929 containerd.go:517] Images already preloaded, skipping extraction
	I0816 22:37:46.804200   18929 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:37:46.836132   18929 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:37:46.836154   18929 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:37:46.836213   18929 ssh_runner.go:149] Run: sudo crictl info
	I0816 22:37:46.870224   18929 cni.go:93] Creating CNI manager for ""
	I0816 22:37:46.870256   18929 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:37:46.870269   18929 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:37:46.870282   18929 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.129 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20210816223333-6986 NodeName:embed-certs-20210816223333-6986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.105.129 CgroupDriver:cgroupf
s ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:37:46.870401   18929 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20210816223333-6986"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
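minikube renders kubeadm configs like the one above from Go templates. A trimmed illustration of that rendering with text/template (the template text here is a small assumed excerpt, not minikube's actual template; the values are the ones from the log):

    package main

    import (
    	"os"
    	"text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:8443
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
    	data := struct {
    		KubernetesVersion, ControlPlaneEndpoint, PodSubnet, ServiceSubnet string
    	}{"v1.21.3", "control-plane.minikube.internal", "10.244.0.0/16", "10.96.0.0/12"}
    	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, data)
    }
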
	I0816 22:37:46.870482   18929 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20210816223333-6986 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.105.129 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816223333-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0816 22:37:46.870540   18929 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0816 22:37:46.878703   18929 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:37:46.878775   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:37:46.887763   18929 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (548 bytes)
	I0816 22:37:46.900548   18929 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 22:37:46.911899   18929 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2090 bytes)
	I0816 22:37:46.925412   18929 ssh_runner.go:149] Run: grep 192.168.105.129	control-plane.minikube.internal$ /etc/hosts
	I0816 22:37:46.929442   18929 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:37:46.939989   18929 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986 for IP: 192.168.105.129
	I0816 22:37:46.940054   18929 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:37:46.940073   18929 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:37:46.940143   18929 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/client.key
	I0816 22:37:46.940182   18929 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/apiserver.key.ff3abd74
	I0816 22:37:46.940203   18929 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/proxy-client.key
	I0816 22:37:46.940311   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem (1338 bytes)
	W0816 22:37:46.940364   18929 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
	I0816 22:37:46.940374   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 22:37:46.940398   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:37:46.940419   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:37:46.940453   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1675 bytes)
	I0816 22:37:46.940501   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:37:46.941607   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:37:46.959921   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 22:37:46.977073   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:37:46.995032   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 22:37:47.016388   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:37:47.036886   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 22:37:47.056736   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:37:47.076945   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 22:37:47.096512   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
	I0816 22:37:47.117888   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
	I0816 22:37:47.137952   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:37:47.159313   18929 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:37:47.173334   18929 ssh_runner.go:149] Run: openssl version
	I0816 22:37:47.179650   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:37:47.191486   18929 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:37:47.196524   18929 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:42 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:37:47.196589   18929 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:37:47.204162   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:37:47.214626   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
	I0816 22:37:47.226391   18929 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6986.pem
	I0816 22:37:47.234494   18929 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:49 /usr/share/ca-certificates/6986.pem
	I0816 22:37:47.234558   18929 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
	I0816 22:37:47.242705   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
	I0816 22:37:47.253305   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
	I0816 22:37:47.263502   18929 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/69862.pem
	I0816 22:37:47.268803   18929 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:49 /usr/share/ca-certificates/69862.pem
	I0816 22:37:47.268865   18929 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
	I0816 22:37:47.274964   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
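
Note: the three openssl/ln pairs above implement OpenSSL's hashed CA directory layout: "openssl x509 -hash -noout" prints the certificate's subject-name hash (e.g. b5213941), and a "<hash>.0" symlink under /etc/ssl/certs is what lets TLS clients on the guest locate each CA. A minimal Go sketch of that one step (not minikube's actual code; it assumes openssl is on PATH and only prints the link command it would run):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl prints the subject-name hash used as the symlink filename.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	// Same shape as the logged command: only create the link if it is absent.
	fmt.Printf("test -L /etc/ssl/certs/%s.0 || ln -fs %s /etc/ssl/certs/%s.0\n", hash, pem, hash)
}
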
	I0816 22:37:47.283354   18929 kubeadm.go:390] StartCluster: {Name:embed-certs-20210816223333-6986 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3
ClusterName:embed-certs-20210816223333-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.129 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] Start
HostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:37:47.283503   18929 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0816 22:37:47.283565   18929 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:37:47.325446   18929 cri.go:76] found id: ""
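
Note: the empty result (found id: "") means crictl reported no kube-system containers in the paused state, so there is nothing to resume; minikube then falls back to checking for existing kubeadm/kubelet configuration files on disk, which is what triggers the "will attempt cluster restart" path on the next lines.
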
	I0816 22:37:47.325557   18929 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:37:47.335659   18929 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:37:47.335682   18929 kubeadm.go:600] restartCluster start
	I0816 22:37:47.335733   18929 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:37:47.346292   18929 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:47.347565   18929 kubeconfig.go:117] verify returned: extract IP: "embed-certs-20210816223333-6986" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:37:47.348014   18929 kubeconfig.go:128] "embed-certs-20210816223333-6986" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:37:47.348788   18929 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:37:47.351634   18929 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:37:47.361663   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:47.361718   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:47.374579   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:47.574973   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:47.575059   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:47.589172   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:47.775434   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:47.775507   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:47.788957   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:47.975270   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:47.975360   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:47.989460   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.175680   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.175758   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.191429   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.375697   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.375790   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.386436   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.574665   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.574762   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.589082   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.775443   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.775512   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.791358   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.975634   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.975720   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.988259   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.175437   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.175544   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.190342   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.375596   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.375683   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.389601   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.574808   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.574892   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.585369   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.775000   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.775066   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.787982   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.975134   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.975231   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.986392   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:50.175658   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:50.175750   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:50.188143   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:50.375418   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:50.375514   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:50.387182   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:50.387201   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:50.387249   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:50.397435   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:50.397461   18929 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
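
Note: the repeated "Checking apiserver status ..." entries above are a fixed-interval poll: roughly every 200ms minikube runs pgrep for a kube-apiserver process until a pid appears or the wait expires, at which point it declares the cluster in need of reconfiguration. A rough sketch of that loop (interval and deadline are read off the timestamps above, not minikube's actual constants):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverPID shells out exactly like the logged command; pgrep exits
// non-zero when no process matches, which surfaces here as an error.
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	return string(out), err
}

func main() {
	deadline := time.Now().Add(3 * time.Second) // assumed budget for illustration
	for time.Now().Before(deadline) {
		if pid, err := apiserverPID(); err == nil {
			fmt.Println("apiserver pid:", pid)
			return
		}
		time.Sleep(200 * time.Millisecond)
	}
	fmt.Println("needs reconfigure: timed out waiting for the condition")
}
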
	I0816 22:37:50.397471   18929 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:37:50.397485   18929 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:37:50.397549   18929 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:37:50.439348   18929 cri.go:76] found id: ""
	I0816 22:37:50.439419   18929 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:37:50.459652   18929 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:37:50.469766   18929 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:37:50.469836   18929 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:37:50.479399   18929 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:37:50.479422   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:50.872420   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:50.387080   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:52.388399   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:53.358602   19204 containerd.go:546] Took 6.359210 seconds to copy over tarball
	I0816 22:37:53.358725   19204 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 22:37:51.735229   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:54.223000   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:52.412541   18929 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.540081052s)
	I0816 22:37:52.412575   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:52.718154   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:52.886875   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
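
Note: the restart path reuses kubeadm's staged init rather than a full "kubeadm init": certs, kubeconfigs, kubelet startup, the control-plane static pods, and local etcd are regenerated phase by phase from the same /var/tmp/minikube/kubeadm.yaml. The order, reconstructed from the log for PID 18929 (a sketch that just prints the commands, not minikube's code):

package main

import "fmt"

func main() {
	// Phase order exactly as it appears in the log above.
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		fmt.Printf("sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml\n", p)
	}
}
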
	I0816 22:37:53.025017   18929 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:37:53.025085   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:53.540988   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:54.040437   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:54.541392   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:55.040418   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:55.540381   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:54.887899   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:56.229434   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:58.302035   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:00.733041   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:56.040801   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:56.540669   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:57.040354   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:57.540386   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:58.040333   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:58.540400   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:59.040772   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:59.540444   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:00.041274   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:00.540645   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:00.741760   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:02.887487   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:03.393238   19204 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (10.034485098s)
	I0816 22:38:03.393270   19204 containerd.go:553] Took 10.034612 seconds to extract the tarball
	I0816 22:38:03.393282   19204 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0816 22:38:03.459021   19204 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:38:03.599477   19204 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:38:03.656046   19204 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0816 22:38:03.843112   19204 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0816 22:38:03.858574   19204 docker.go:153] disabling docker service ...
	I0816 22:38:03.858632   19204 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:38:03.872784   19204 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:38:03.886816   19204 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:38:04.029472   19204 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:38:04.164998   19204 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:38:04.176395   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:38:04.190579   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kI
gogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
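
Note: the base64 payload piped into /etc/containerd/config.toml decodes deterministically; its opening lines are:

root = "/var/lib/containerd"
state = "/run/containerd"
oom_score = 0
[grpc]
  address = "/run/containerd/containerd.sock"
  uid = 0
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216

Further down, the same payload sets SystemdCgroup = false, sandbox_image = "k8s.gcr.io/pause:3.4.1", and snapshotter = "overlayfs", matching the containerd 1.4.9 runtime this run targets.
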
	I0816 22:38:04.204338   19204 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:38:04.211355   19204 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:38:04.211415   19204 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:38:04.229181   19204 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 22:38:04.236487   19204 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:38:04.368079   19204 ssh_runner.go:149] Run: sudo systemctl restart containerd
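
Note: the commands above are the usual bridge-netfilter preparation: the sysctl stat fails because br_netfilter is not loaded yet (hence "which might be okay"), so the module is loaded and IPv4 forwarding is enabled before containerd is restarted. A minimal sketch of the same check-then-load sequence (assuming passwordless sudo, as the harness has):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge sysctl file is missing, the module is simply not loaded.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe br_netfilter failed:", err)
		}
	}
	// Enable IPv4 forwarding, as in the logged command.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}
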
	I0816 22:38:03.226580   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:05.846484   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:01.040586   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:01.541229   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:02.041014   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:02.540773   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:03.040804   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:03.540654   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:04.041158   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:04.540403   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:05.041212   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:05.540477   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:05.871071   19204 ssh_runner.go:189] Completed: sudo systemctl restart containerd: (1.502953255s)
	I0816 22:38:05.871107   19204 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0816 22:38:05.871162   19204 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:38:05.876672   19204 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
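
Note: retry.go backs off for a jittered ~1.1s before re-stat'ing the socket, inside the 60s budget announced at "Will wait 60s for socket path". A sketch of that wait (only the 60s budget and ~1s delay come from the log; the jitter range is an assumption):

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

func main() {
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		if _, err := os.Stat("/run/containerd/containerd.sock"); err == nil {
			fmt.Println("socket is up")
			return
		}
		// Jittered sleep around 1s, matching the logged "will retry after 1.104...s".
		time.Sleep(time.Second + time.Duration(rand.Intn(200))*time.Millisecond)
	}
	fmt.Println("timed out waiting for /run/containerd/containerd.sock")
}
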
	I0816 22:38:06.981936   19204 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:38:06.987477   19204 start.go:413] Will wait 60s for crictl version
	I0816 22:38:06.987542   19204 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:38:07.019404   19204 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0816 22:38:07.019460   19204 ssh_runner.go:149] Run: containerd --version
	I0816 22:38:07.056241   19204 ssh_runner.go:149] Run: containerd --version
	I0816 22:38:05.841456   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:07.888564   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:07.088137   19204 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0816 22:38:07.088183   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetIP
	I0816 22:38:07.093462   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:38:07.093796   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:38:07.093832   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:38:07.093973   19204 ssh_runner.go:149] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 22:38:07.098921   19204 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
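
Note: the one-liner above is an idempotent /etc/hosts update: grep -v strips any stale host.minikube.internal line, echo appends the fresh mapping for the gateway IP, and the result is staged in a temp file and copied back with sudo, so the rewrite is a single cp rather than an in-place edit.
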
	I0816 22:38:07.109221   19204 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:38:07.109293   19204 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:38:07.143575   19204 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:38:07.143601   19204 containerd.go:517] Images already preloaded, skipping extraction
	I0816 22:38:07.143659   19204 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:38:07.174105   19204 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:38:07.174129   19204 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:38:07.174182   19204 ssh_runner.go:149] Run: sudo crictl info
	I0816 22:38:07.212980   19204 cni.go:93] Creating CNI manager for ""
	I0816 22:38:07.213012   19204 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:38:07.213028   19204 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:38:07.213043   19204 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.186 APIServerPort:8444 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20210816223418-6986 NodeName:default-k8s-different-port-20210816223418-6986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.
50.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:38:07.213191   19204 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.186
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20210816223418-6986"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 22:38:07.213279   19204 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20210816223418-6986 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.186 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210816223418-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
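
Note: the generated unit wires the kubelet to containerd rather than Docker: --container-runtime=remote plus matching --container-runtime-endpoint and --image-service-endpoint unix sockets, with only the per-node settings (hostname override, node IP, CNI plugin) on the command line; everything else comes from /var/lib/kubelet/config.yaml via --config.
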
	I0816 22:38:07.213332   19204 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0816 22:38:07.222054   19204 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:38:07.222139   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:38:07.230063   19204 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (562 bytes)
	I0816 22:38:07.244461   19204 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 22:38:07.259892   19204 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0816 22:38:07.274883   19204 ssh_runner.go:149] Run: grep 192.168.50.186	control-plane.minikube.internal$ /etc/hosts
	I0816 22:38:07.280261   19204 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:38:07.293265   19204 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986 for IP: 192.168.50.186
	I0816 22:38:07.293314   19204 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:38:07.293333   19204 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:38:07.293384   19204 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/client.key
	I0816 22:38:07.293423   19204 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/apiserver.key.c5cc0a12
	I0816 22:38:07.293458   19204 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/proxy-client.key
	I0816 22:38:07.293569   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem (1338 bytes)
	W0816 22:38:07.293608   19204 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
	I0816 22:38:07.293618   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 22:38:07.293643   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:38:07.293668   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:38:07.293692   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1675 bytes)
	I0816 22:38:07.293738   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:38:07.294686   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:38:07.314730   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 22:38:07.332358   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:38:07.351920   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 22:38:07.369849   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:38:07.388099   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 22:38:07.406297   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:38:07.425998   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 22:38:07.443687   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:38:07.460832   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
	I0816 22:38:07.481210   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
	I0816 22:38:07.501717   19204 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:38:07.514903   19204 ssh_runner.go:149] Run: openssl version
	I0816 22:38:07.520949   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:38:07.531264   19204 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:38:07.536846   19204 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:42 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:38:07.536898   19204 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:38:07.543551   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:38:07.553322   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
	I0816 22:38:07.563414   19204 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6986.pem
	I0816 22:38:07.568579   19204 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:49 /usr/share/ca-certificates/6986.pem
	I0816 22:38:07.568631   19204 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
	I0816 22:38:07.574828   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
	I0816 22:38:07.582849   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
	I0816 22:38:07.591254   19204 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/69862.pem
	I0816 22:38:07.595981   19204 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:49 /usr/share/ca-certificates/69862.pem
	I0816 22:38:07.596044   19204 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
	I0816 22:38:07.602206   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:38:07.611191   19204 kubeadm.go:390] StartCluster: {Name:default-k8s-different-port-20210816223418-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.21.3 ClusterName:default-k8s-different-port-20210816223418-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_read
y:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:38:07.611272   19204 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0816 22:38:07.611319   19204 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:38:07.643146   19204 cri.go:76] found id: ""
	I0816 22:38:07.643226   19204 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:38:07.650886   19204 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:38:07.650919   19204 kubeadm.go:600] restartCluster start
	I0816 22:38:07.650971   19204 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:38:07.658653   19204 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:07.659605   19204 kubeconfig.go:117] verify returned: extract IP: "default-k8s-different-port-20210816223418-6986" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:38:07.660046   19204 kubeconfig.go:128] "default-k8s-different-port-20210816223418-6986" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:38:07.661820   19204 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:38:07.664797   19204 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:38:07.672378   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:07.672416   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:07.682197   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:07.882615   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:07.882689   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:07.893628   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.082995   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.083063   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.092764   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.283037   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.283112   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.293325   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.482586   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.482681   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.493502   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.682844   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.682915   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.693201   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.882416   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.882491   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.892118   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.082359   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.082457   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.092165   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.282385   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.282459   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.291528   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.482860   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.482930   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.493037   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.682335   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.682408   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.691945   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.883133   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.883193   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.892794   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.083140   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.083233   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.092308   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.223670   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:10.742112   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:06.041308   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:06.540690   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:07.041155   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:07.540839   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:08.040793   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:08.541292   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:09.041388   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:09.540943   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:10.041377   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:10.541237   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:10.386476   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:12.889815   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:10.282796   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.282889   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.292190   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.482261   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.482330   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.491729   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.683104   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.683186   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.693060   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.693079   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.693121   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.701893   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.701916   19204 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0816 22:38:10.701925   19204 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:38:10.701938   19204 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:38:10.701989   19204 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:38:10.740433   19204 cri.go:76] found id: ""
	I0816 22:38:10.740501   19204 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:38:10.756485   19204 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:38:10.765450   19204 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:38:10.765507   19204 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:38:10.772477   19204 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:38:10.772499   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:11.017384   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:12.671111   19204 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.653686174s)
	I0816 22:38:12.671155   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:12.947393   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:13.086256   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:13.215447   19204 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:38:13.215508   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:13.731105   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.231119   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.731093   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:15.231319   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
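
The ~500 ms cadence of the pgrep probes above (13.215s, 13.731s, 14.231s, ...) is minikube's wait for the kube-apiserver process to appear after the kubeadm init phases are re-run. A minimal sketch of that polling pattern — not minikube's actual implementation; the half-second interval and four-minute budget are assumptions chosen to match the visible timestamps — looks like:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process for the
// minikube profile shows up, or the deadline passes. pgrep exits 0 and prints
// a PID when a match exists, and exits 1 otherwise (the "status 1" in the log).
func waitForAPIServerProcess(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			return nil // process found
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for kube-apiserver process")
}

func main() {
	// Interval and timeout are illustrative, not minikube's exact values.
	if err := waitForAPIServerProcess(500*time.Millisecond, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
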
	I0816 22:38:13.224797   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:15.723341   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:11.040800   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:11.540697   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:12.040673   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:12.541181   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:13.041152   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:13.541025   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.041183   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.541230   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.551768   18929 api_server.go:70] duration metric: took 21.526753133s to wait for apiserver process to appear ...
	I0816 22:38:14.551790   18929 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:38:14.551800   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:38:15.386344   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:16.395588   18635 pod_ready.go:92] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.395621   18635 pod_ready.go:81] duration metric: took 51.044447203s waiting for pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.395634   18635 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.408068   18635 pod_ready.go:92] pod "etcd-old-k8s-version-20210816223154-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.408086   18635 pod_ready.go:81] duration metric: took 12.443476ms waiting for pod "etcd-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.408096   18635 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.414488   18635 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20210816223154-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.414507   18635 pod_ready.go:81] duration metric: took 6.402316ms waiting for pod "kube-apiserver-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.414521   18635 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.420281   18635 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20210816223154-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.420300   18635 pod_ready.go:81] duration metric: took 5.769412ms waiting for pod "kube-controller-manager-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.420313   18635 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nvb2s" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.425411   18635 pod_ready.go:92] pod "kube-proxy-nvb2s" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.425430   18635 pod_ready.go:81] duration metric: took 5.109715ms waiting for pod "kube-proxy-nvb2s" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.425440   18635 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.784339   18635 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20210816223154-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.784360   18635 pod_ready.go:81] duration metric: took 358.911908ms waiting for pod "kube-scheduler-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.784371   18635 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:18.553150   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:38:18.553194   18929 api_server.go:101] status: https://192.168.105.129:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:38:19.053887   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:38:19.071151   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:38:19.071179   18929 api_server.go:101] status: https://192.168.105.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:38:19.553619   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:38:19.561382   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:38:19.561406   18929 api_server.go:101] status: https://192.168.105.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:38:20.053341   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:38:20.061527   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 200:
	ok
	I0816 22:38:20.069537   18929 api_server.go:139] control plane version: v1.21.3
	I0816 22:38:20.069560   18929 api_server.go:129] duration metric: took 5.517764917s to wait for apiserver health ...
	I0816 22:38:20.069572   18929 cni.go:93] Creating CNI manager for ""
	I0816 22:38:20.069581   18929 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
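
For reference, the 403 -> 500 -> 200 progression above is what an unauthenticated /healthz probe sees during apiserver startup: first anonymous requests are rejected outright, then the endpoint answers but reports failing poststarthooks (rbac/bootstrap-roles and friends), and finally it returns 200 "ok". A rough sketch of such a polling loop follows — illustrative only; minikube's real client presents client certificates, which this sketch sidesteps with InsecureSkipVerify, and the URL and timings are taken from or assumed to match the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz probes an apiserver /healthz endpoint until it returns 200.
// 403 (anonymous user rejected) and 500 (poststarthooks still failing) are
// both treated as "not ready yet".
func pollHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is self-signed and this probe is unauthenticated,
		// hence the skipped verification in this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := pollHealthz("https://192.168.105.129:8443/healthz", 500*time.Millisecond, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
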
	I0816 22:38:15.731207   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:16.231247   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:16.731268   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:17.230730   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:17.730956   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:18.231458   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:18.730950   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:19.230879   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:19.730819   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:20.230563   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:20.243853   19204 api_server.go:70] duration metric: took 7.028407985s to wait for apiserver process to appear ...
	I0816 22:38:20.243876   19204 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:38:20.243887   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:18.225200   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:20.243220   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:20.071659   18929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:38:20.071738   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:38:20.084719   18929 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:38:20.113939   18929 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:38:20.132494   18929 system_pods.go:59] 8 kube-system pods found
	I0816 22:38:20.132598   18929 system_pods.go:61] "coredns-558bd4d5db-jq6bb" [c088e8ae-638c-449f-b206-10b016f707f4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:38:20.132622   18929 system_pods.go:61] "etcd-embed-certs-20210816223333-6986" [350ff095-f45d-4c87-a10a-cbb9a0cc4358] Running
	I0816 22:38:20.132654   18929 system_pods.go:61] "kube-apiserver-embed-certs-20210816223333-6986" [7ee444e9-f198-4d9b-985e-b190a2e5e369] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 22:38:20.132667   18929 system_pods.go:61] "kube-controller-manager-embed-certs-20210816223333-6986" [c71ecc69-d617-48d3-a162-46d27aedd0a9] Running
	I0816 22:38:20.132676   18929 system_pods.go:61] "kube-proxy-8h6xz" [7cbdd516-13c5-469b-8e60-7dc0babb699a] Running
	I0816 22:38:20.132688   18929 system_pods.go:61] "kube-scheduler-embed-certs-20210816223333-6986" [4ebf165e-13c3-4f42-a75f-4301ea2f6c78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:38:20.132698   18929 system_pods.go:61] "metrics-server-7c784ccb57-9xpsr" [6b6283cf-0668-48a4-9f21-61cb5723f0b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:38:20.132704   18929 system_pods.go:61] "storage-provisioner" [7893460e-43c2-4606-8b56-c2ed9ac764bd] Running
	I0816 22:38:20.132712   18929 system_pods.go:74] duration metric: took 18.749758ms to wait for pod list to return data ...
	I0816 22:38:20.132721   18929 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:38:20.138564   18929 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:38:20.138614   18929 node_conditions.go:123] node cpu capacity is 2
	I0816 22:38:20.138632   18929 node_conditions.go:105] duration metric: took 5.904026ms to run NodePressure ...
	I0816 22:38:20.138651   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:20.830223   18929 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:38:20.835364   18929 kubeadm.go:746] kubelet initialised
	I0816 22:38:20.835384   18929 kubeadm.go:747] duration metric: took 5.139864ms waiting for restarted kubelet to initialise ...
	I0816 22:38:20.835392   18929 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:38:20.841354   18929 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace to be "Ready" ...
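
The long runs of pod_ready.go:102 lines that follow are this same check repeating until each system-critical pod reports the Ready condition. In client-go terms the per-pod test reduces to something like the sketch below; the kubeconfig path, pod name, and two-second poll cadence are assumptions for illustration, not minikube's code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path is a placeholder for this sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Poll one kube-system pod until Ready, mirroring the "up to 4m0s" budget
	// the log mentions for each pod.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-558bd4d5db-jq6bb", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
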
	I0816 22:38:19.191797   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:21.192936   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:25.244953   19204 api_server.go:255] stopped: https://192.168.50.186:8444/healthz: Get "https://192.168.50.186:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 22:38:22.723414   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:25.223163   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:22.860677   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:24.863916   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:23.690499   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:25.690995   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:27.691820   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:25.746028   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:27.721976   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:29.722107   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:27.361030   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:29.361218   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:30.190894   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:32.192100   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:30.746969   19204 api_server.go:255] stopped: https://192.168.50.186:8444/healthz: Get "https://192.168.50.186:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 22:38:31.245148   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:32.224115   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:34.723153   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:31.859919   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:33.863770   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:34.691552   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:36.693980   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:36.246218   19204 api_server.go:255] stopped: https://192.168.50.186:8444/healthz: Get "https://192.168.50.186:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 22:38:36.745853   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:37.223369   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:39.239225   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:36.360668   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:38.361218   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:40.871372   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:41.344967   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:38:41.344991   19204 api_server.go:101] status: https://192.168.50.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:38:41.745061   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:41.754168   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:38:41.754195   19204 api_server.go:101] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:38:42.245898   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:42.258458   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:38:42.258509   19204 api_server.go:101] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:38:42.745610   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:42.756658   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 200:
	ok
	I0816 22:38:42.770293   19204 api_server.go:139] control plane version: v1.21.3
	I0816 22:38:42.770321   19204 api_server.go:129] duration metric: took 22.526438535s to wait for apiserver health ...
	I0816 22:38:42.770332   19204 cni.go:93] Creating CNI manager for ""
	I0816 22:38:42.770339   19204 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:38:39.192176   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:41.198006   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:42.772377   19204 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:38:42.772434   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:38:42.788298   19204 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:38:42.809709   19204 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:38:42.824805   19204 system_pods.go:59] 8 kube-system pods found
	I0816 22:38:42.824843   19204 system_pods.go:61] "coredns-558bd4d5db-ssfkf" [eb30728b-0eae-41d8-90bc-d8de8c6b4caa] Running
	I0816 22:38:42.824857   19204 system_pods.go:61] "etcd-default-k8s-different-port-20210816223418-6986" [825a27d4-d8dc-4dbe-a724-ac2e59508c5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 22:38:42.824865   19204 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210816223418-6986" [a3383733-5a20-4b5a-aeab-df3e61e37d94] Running
	I0816 22:38:42.824882   19204 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210816223418-6986" [42f433b1-271b-41a6-96a0-ab85fe6ba28e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 22:38:42.824896   19204 system_pods.go:61] "kube-proxy-psg4t" [98ca6629-d521-445d-99c2-b7e7ddf3b973] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:38:42.824905   19204 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210816223418-6986" [bef50322-5dc7-4680-b867-e17eb23298a8] Running
	I0816 22:38:42.824919   19204 system_pods.go:61] "metrics-server-7c784ccb57-rmrr6" [325f4892-3ae2-4a08-bc13-22c74c15c362] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:38:42.824929   19204 system_pods.go:61] "storage-provisioner" [89aadc6c-b5b0-47eb-b6e0-0f5fb78b1689] Running
	I0816 22:38:42.824936   19204 system_pods.go:74] duration metric: took 15.209253ms to wait for pod list to return data ...
	I0816 22:38:42.824947   19204 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:38:42.835095   19204 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:38:42.835144   19204 node_conditions.go:123] node cpu capacity is 2
	I0816 22:38:42.835160   19204 node_conditions.go:105] duration metric: took 10.206913ms to run NodePressure ...
	I0816 22:38:42.835178   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:43.431532   19204 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:38:43.443469   19204 kubeadm.go:746] kubelet initialised
	I0816 22:38:43.443543   19204 kubeadm.go:747] duration metric: took 11.973692ms waiting for restarted kubelet to initialise ...
	I0816 22:38:43.443567   19204 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:38:43.467119   19204 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-ssfkf" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:43.487197   19204 pod_ready.go:92] pod "coredns-558bd4d5db-ssfkf" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:43.487224   19204 pod_ready.go:81] duration metric: took 20.062907ms waiting for pod "coredns-558bd4d5db-ssfkf" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:43.487236   19204 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:41.723036   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:43.727234   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:42.883394   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:45.360217   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:43.692394   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:46.195001   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:45.513670   19204 pod_ready.go:102] pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:47.520170   19204 pod_ready.go:102] pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:50.012608   19204 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:50.012643   19204 pod_ready.go:81] duration metric: took 6.525398312s waiting for pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:50.012653   19204 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:50.018616   19204 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:50.018632   19204 pod_ready.go:81] duration metric: took 5.971078ms waiting for pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:50.018641   19204 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:46.223793   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:48.231527   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:50.721902   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:47.864929   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:50.359955   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:48.690708   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:50.691511   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:53.191133   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:52.030327   19204 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:54.530276   19204 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:52.723113   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:54.730785   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:52.865142   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:55.362902   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:55.692797   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:58.193231   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:56.537583   19204 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:59.032998   19204 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:59.531144   19204 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:59.531179   19204 pod_ready.go:81] duration metric: took 9.512530001s waiting for pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:59.531194   19204 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-psg4t" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:57.227423   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:59.722421   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:57.860847   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:00.383065   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:00.194401   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:02.693032   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:01.045104   19204 pod_ready.go:92] pod "kube-proxy-psg4t" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.045136   19204 pod_ready.go:81] duration metric: took 1.513934389s waiting for pod "kube-proxy-psg4t" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.045162   19204 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:03.065559   19204 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:02.225371   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:04.231432   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:01.360648   18929 pod_ready.go:92] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.360679   18929 pod_ready.go:81] duration metric: took 40.519291305s waiting for pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.360692   18929 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.377816   18929 pod_ready.go:92] pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.377835   18929 pod_ready.go:81] duration metric: took 17.135128ms waiting for pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.377844   18929 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.384900   18929 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.384919   18929 pod_ready.go:81] duration metric: took 7.067915ms waiting for pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.384928   18929 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.391593   18929 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.391615   18929 pod_ready.go:81] duration metric: took 6.679953ms waiting for pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.391628   18929 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8h6xz" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.397839   18929 pod_ready.go:92] pod "kube-proxy-8h6xz" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.397859   18929 pod_ready.go:81] duration metric: took 6.224125ms waiting for pod "kube-proxy-8h6xz" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.397870   18929 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.757203   18929 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.757231   18929 pod_ready.go:81] duration metric: took 359.352415ms waiting for pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.757245   18929 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:04.166965   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:05.190883   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:07.691413   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:05.560049   19204 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:07.563106   19204 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:10.058732   19204 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:06.241105   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:08.721067   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:10.729982   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:06.173818   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:08.671197   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:10.190249   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:12.190937   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:11.058551   19204 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:11.058589   19204 pod_ready.go:81] duration metric: took 10.013415785s waiting for pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:11.058602   19204 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:13.079741   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:13.222923   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:15.223480   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:11.169568   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:13.668888   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:15.675907   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:14.691328   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:17.193097   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:15.574185   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:18.080714   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:17.721688   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:20.223136   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:18.166872   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:20.167888   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:19.690743   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:21.695097   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:20.573176   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:22.575373   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:25.080599   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:22.721982   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:24.723334   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:22.674385   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:25.168465   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:24.191127   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:26.692188   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:27.576508   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:30.077538   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:26.725975   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:29.222550   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:27.667108   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:29.672819   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:29.190076   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:31.191096   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:32.573255   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:34.574846   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:31.222778   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:33.721695   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:35.722989   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:32.167222   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:34.168925   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:33.691602   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:35.693194   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:38.192247   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:36.575818   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:39.074280   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:37.724177   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:40.222061   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:36.667227   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:38.667709   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:40.193105   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:42.691214   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:41.577819   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:44.074371   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:42.222318   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:44.223676   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:41.169382   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:43.169678   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:45.172140   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:44.692521   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:46.693152   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:46.080520   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:48.574175   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:46.226822   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:48.723407   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:50.723464   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:47.669324   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:50.168305   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:49.191566   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:51.192223   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:50.574493   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:53.072736   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:55.075288   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:53.226025   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:55.722244   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:52.667088   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:54.668826   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:53.690899   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:55.692317   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:58.190689   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:57.076942   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:59.573822   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:58.225641   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:00.721925   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:57.165321   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:59.171812   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:00.194014   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:02.691574   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:01.573901   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:04.073928   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:02.724585   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:04.724644   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:01.175154   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:03.669857   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:05.191832   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:07.693327   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:06.576903   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:09.078443   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:07.222275   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:09.224637   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:06.167190   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:08.168551   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:10.668660   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:10.191769   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:12.693193   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:11.574665   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:13.576508   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:11.224838   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:13.721159   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:15.727256   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:12.670244   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:15.167885   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:15.194325   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:17.692108   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:16.072818   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:18.078890   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:17.729812   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:20.226491   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:17.177047   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:19.217251   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:20.192280   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:22.693518   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:20.574552   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:22.574777   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:24.577476   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:22.727579   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:24.728352   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:21.668537   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:24.167106   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:25.191135   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:27.191723   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:27.075236   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:29.574554   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:27.223601   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:29.225348   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:26.172206   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:28.666902   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:30.667512   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:29.693817   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:32.192170   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:32.073947   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:34.076857   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:31.806875   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:34.222064   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:32.670097   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:35.167425   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:34.193574   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:36.692421   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:36.575233   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:39.074418   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:36.223456   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:38.224575   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:40.721673   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:37.168398   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:39.172793   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:39.196016   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:41.690324   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:41.075116   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:43.576123   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:42.724088   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:44.724675   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:41.674073   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:44.170704   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:43.693077   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:45.693362   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:48.190525   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:45.576264   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:48.077395   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:46.729980   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:49.221967   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:46.171454   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:48.665714   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:50.668334   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:50.193564   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:52.691234   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:50.572686   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:52.574382   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:55.074999   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:51.222668   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:53.226343   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:55.725259   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:52.673171   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:55.168585   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:54.692513   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:57.191126   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:57.079875   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:59.573017   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:58.221527   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:00.227502   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:57.671255   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:00.168665   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:59.691534   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:01.693478   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:01.582883   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:04.072426   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:02.722966   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:04.727296   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:02.173240   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:04.665480   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:04.191798   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:06.691447   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:06.073825   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:08.074664   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:10.075325   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:07.223517   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:09.721892   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:06.667330   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:08.671220   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:09.191192   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:11.691389   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:12.076107   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:14.575585   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:11.725914   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:13.730699   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:11.169385   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:13.673312   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:14.191060   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:16.192184   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:16.576492   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:19.076650   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:16.225569   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:18.724188   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:20.724698   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:16.165664   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:18.166105   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:20.166339   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:18.691871   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:20.691922   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:23.191074   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:21.574173   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:24.075930   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:23.223119   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:25.223978   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:22.173729   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:24.666435   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:25.692064   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:27.693165   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:26.574028   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:28.577627   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:27.723162   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:30.225428   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:26.666698   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:28.667290   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:30.669320   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:30.191236   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:32.194129   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:31.078550   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:33.574708   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:32.272795   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:34.721477   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:32.670349   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:35.166861   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:34.691270   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:36.693071   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:36.073462   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:38.075367   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:36.731674   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:39.226976   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:37.170645   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:39.724821   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:39.190190   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:41.192605   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:43.194313   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:40.572815   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:43.074323   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:41.728026   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:44.222098   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:45.713684   18923 pod_ready.go:81] duration metric: took 4m0.016600156s waiting for pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace to be "Ready" ...
	E0816 22:41:45.713707   18923 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:41:45.713739   18923 pod_ready.go:38] duration metric: took 4m11.701504099s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:41:45.713769   18923 kubeadm.go:604] restartCluster took 4m33.579475629s
	W0816 22:41:45.713944   18923 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:41:45.714027   18923 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0816 22:41:42.167746   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:44.671010   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:45.690207   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:47.696181   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:45.573577   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:47.577169   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:50.074120   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:49.532312   18923 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.817885262s)
	I0816 22:41:49.532396   18923 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:41:49.547377   18923 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:41:49.547460   18923 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:41:49.586205   18923 cri.go:76] found id: "c265ff52803b42d9b0ff6036d0ca9fe4b655c9875474baa9a0a3f7255d230e17"
	I0816 22:41:49.586231   18923 cri.go:76] found id: ""
	W0816 22:41:49.586237   18923 kubeadm.go:840] found 1 kube-system containers to stop
	I0816 22:41:49.586243   18923 cri.go:221] Stopping containers: [c265ff52803b42d9b0ff6036d0ca9fe4b655c9875474baa9a0a3f7255d230e17]
	I0816 22:41:49.586286   18923 ssh_runner.go:149] Run: which crictl
	I0816 22:41:49.590992   18923 ssh_runner.go:149] Run: sudo /bin/crictl stop c265ff52803b42d9b0ff6036d0ca9fe4b655c9875474baa9a0a3f7255d230e17
	I0816 22:41:49.626874   18923 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:41:49.635033   18923 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:41:49.643072   18923 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:41:49.643114   18923 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0816 22:41:46.671498   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:49.167852   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:50.191302   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:52.194912   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:52.573508   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:54.574289   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:51.170118   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:53.672114   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:54.691353   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:56.691660   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:57.075408   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:59.575201   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:56.166934   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:58.175241   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:00.668070   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:58.692572   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:00.693110   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:02.693563   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:02.073370   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:04.074072   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:03.171450   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:05.675018   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:05.192214   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:07.692700   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:08.829041   18923 out.go:204]   - Generating certificates and keys ...
	I0816 22:42:08.831708   18923 out.go:204]   - Booting up control plane ...
	I0816 22:42:08.834200   18923 out.go:204]   - Configuring RBAC rules ...
	I0816 22:42:08.836416   18923 cni.go:93] Creating CNI manager for ""
	I0816 22:42:08.836433   18923 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:42:06.578343   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:08.578554   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:08.838017   18923 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:42:08.838073   18923 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:42:08.846501   18923 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:42:08.869457   18923 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:42:08.869501   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:08.869527   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=no-preload-20210816223156-6986 minikube.k8s.io/updated_at=2021_08_16T22_42_08_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:09.240543   18923 ops.go:34] apiserver oom_adj: -16
	I0816 22:42:09.240662   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:09.839173   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:10.338906   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:10.839126   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:08.175656   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:10.670201   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:09.693093   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:12.193949   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:11.076847   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:13.572667   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:11.339623   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:11.839145   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:12.339335   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:12.839352   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:13.339016   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:13.838633   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:14.339209   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:14.839574   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:15.338605   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:15.838986   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:13.166828   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:15.170558   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:14.195434   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:16.691097   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:17.183312   18635 pod_ready.go:81] duration metric: took 4m0.398928004s waiting for pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace to be "Ready" ...
	E0816 22:42:17.183337   18635 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:42:17.183357   18635 pod_ready.go:38] duration metric: took 4m51.857756569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:42:17.183387   18635 kubeadm.go:604] restartCluster took 5m19.62322748s
	W0816 22:42:17.183554   18635 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:42:17.183589   18635 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0816 22:42:15.573445   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:17.576213   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:19.578780   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:16.339618   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:16.839112   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:17.338889   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:17.838606   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:18.339509   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:18.839537   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:19.338632   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:19.839240   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:20.339527   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:20.838664   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:17.671899   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:19.672963   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:20.586991   18635 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.403367986s)
	I0816 22:42:20.587083   18635 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:42:20.603414   18635 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:42:20.603499   18635 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:42:20.644469   18635 cri.go:76] found id: ""
	I0816 22:42:20.644547   18635 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:42:20.654179   18635 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:42:20.664747   18635 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:42:20.664790   18635 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
	I0816 22:42:21.326940   18635 out.go:204]   - Generating certificates and keys ...
	I0816 22:42:21.189008   18923 kubeadm.go:985] duration metric: took 12.319564991s to wait for elevateKubeSystemPrivileges.
	I0816 22:42:21.189042   18923 kubeadm.go:392] StartCluster complete in 5m9.132482632s
	I0816 22:42:21.189068   18923 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:42:21.189186   18923 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:42:21.191084   18923 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0816 22:42:21.253468   18923 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0816 22:42:22.263255   18923 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20210816223156-6986" rescaled to 1
	I0816 22:42:22.263323   18923 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.116.66 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0816 22:42:22.265111   18923 out.go:177] * Verifying Kubernetes components...
	I0816 22:42:22.265169   18923 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:42:22.263389   18923 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:42:22.263413   18923 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:42:22.265318   18923 addons.go:59] Setting storage-provisioner=true in profile "no-preload-20210816223156-6986"
	I0816 22:42:22.265343   18923 addons.go:135] Setting addon storage-provisioner=true in "no-preload-20210816223156-6986"
	I0816 22:42:22.265343   18923 addons.go:59] Setting dashboard=true in profile "no-preload-20210816223156-6986"
	W0816 22:42:22.265352   18923 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:42:22.265365   18923 addons.go:135] Setting addon dashboard=true in "no-preload-20210816223156-6986"
	W0816 22:42:22.265384   18923 addons.go:147] addon dashboard should already be in state true
	I0816 22:42:22.263563   18923 config.go:177] Loaded profile config "no-preload-20210816223156-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0816 22:42:22.265401   18923 addons.go:59] Setting metrics-server=true in profile "no-preload-20210816223156-6986"
	I0816 22:42:22.265412   18923 host.go:66] Checking if "no-preload-20210816223156-6986" exists ...
	I0816 22:42:22.265427   18923 addons.go:135] Setting addon metrics-server=true in "no-preload-20210816223156-6986"
	W0816 22:42:22.265437   18923 addons.go:147] addon metrics-server should already be in state true
	I0816 22:42:22.265384   18923 addons.go:59] Setting default-storageclass=true in profile "no-preload-20210816223156-6986"
	I0816 22:42:22.265462   18923 host.go:66] Checking if "no-preload-20210816223156-6986" exists ...
	I0816 22:42:22.265390   18923 host.go:66] Checking if "no-preload-20210816223156-6986" exists ...
	I0816 22:42:22.265461   18923 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20210816223156-6986"
	I0816 22:42:22.265940   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.265944   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.265957   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.265942   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.265975   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.265986   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.266089   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.266123   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.281969   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45777
	I0816 22:42:22.282708   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.282877   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40837
	I0816 22:42:22.283046   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42349
	I0816 22:42:22.283302   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.283322   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.283427   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.283650   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.283893   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.284078   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.284092   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.284330   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.284347   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.284461   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.284627   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.284665   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.284970   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.285003   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.285116   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.285285   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.293128   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38523
	I0816 22:42:22.293558   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.294059   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.294082   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.294429   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.294987   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.295053   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.298092   18923 addons.go:135] Setting addon default-storageclass=true in "no-preload-20210816223156-6986"
	W0816 22:42:22.298118   18923 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:42:22.298147   18923 host.go:66] Checking if "no-preload-20210816223156-6986" exists ...
	I0816 22:42:22.298560   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.298601   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.302416   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44833
	I0816 22:42:22.302994   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.303562   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.303593   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.304002   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.304209   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.305854   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34121
	I0816 22:42:22.306273   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.307236   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.307263   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.307631   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.307783   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.308340   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .DriverName
	I0816 22:42:22.310958   18923 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:42:22.311023   18923 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:42:22.311044   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:42:22.311064   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHHostname
	I0816 22:42:22.311377   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .DriverName
	I0816 22:42:22.313216   18923 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:42:22.311947   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45779
	I0816 22:42:22.313321   18923 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:42:22.313337   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:42:22.312981   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38669
	I0816 22:42:22.313354   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHHostname
	I0816 22:42:22.313674   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.313848   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.314124   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.314144   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.314391   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.314413   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.314493   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.314698   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.314875   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.315544   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.315591   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.319514   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .DriverName
	I0816 22:42:22.319736   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.321507   18923 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:42:22.320102   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:66:08", ip: ""} in network mk-no-preload-20210816223156-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:37:07 +0000 UTC Type:0 Mac:52:54:00:1e:66:08 Iaid: IPaddr:192.168.116.66 Prefix:24 Hostname:no-preload-20210816223156-6986 Clientid:01:52:54:00:1e:66:08}
	I0816 22:42:22.320309   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHPort
	I0816 22:42:22.320694   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.321331   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHPort
	I0816 22:42:22.321669   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:66:08", ip: ""} in network mk-no-preload-20210816223156-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:37:07 +0000 UTC Type:0 Mac:52:54:00:1e:66:08 Iaid: IPaddr:192.168.116.66 Prefix:24 Hostname:no-preload-20210816223156-6986 Clientid:01:52:54:00:1e:66:08}
	I0816 22:42:22.321594   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined IP address 192.168.116.66 and MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.180281   18635 out.go:204]   - Booting up control plane ...
	I0816 22:42:22.073806   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:24.079495   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:22.323189   18923 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:42:22.321708   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined IP address 192.168.116.66 and MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.321766   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHKeyPath
	I0816 22:42:22.321808   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHKeyPath
	I0816 22:42:22.323243   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:42:22.323341   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:42:22.323363   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHHostname
	I0816 22:42:22.323468   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHUsername
	I0816 22:42:22.323473   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHUsername
	I0816 22:42:22.323663   18923 sshutil.go:53] new ssh client: &{IP:192.168.116.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa Username:docker}
	I0816 22:42:22.323678   18923 sshutil.go:53] new ssh client: &{IP:192.168.116.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa Username:docker}
	I0816 22:42:22.328724   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45831
	I0816 22:42:22.329130   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.329535   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.329554   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.329851   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.329938   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.330124   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.330329   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:66:08", ip: ""} in network mk-no-preload-20210816223156-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:37:07 +0000 UTC Type:0 Mac:52:54:00:1e:66:08 Iaid: IPaddr:192.168.116.66 Prefix:24 Hostname:no-preload-20210816223156-6986 Clientid:01:52:54:00:1e:66:08}
	I0816 22:42:22.330363   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined IP address 192.168.116.66 and MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.330478   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHPort
	I0816 22:42:22.330620   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHKeyPath
	I0816 22:42:22.330750   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHUsername
	I0816 22:42:22.330873   18923 sshutil.go:53] new ssh client: &{IP:192.168.116.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa Username:docker}
	I0816 22:42:22.333001   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .DriverName
	I0816 22:42:22.333246   18923 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:42:22.333262   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:42:22.333279   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHHostname
	I0816 22:42:22.338603   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.339024   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:66:08", ip: ""} in network mk-no-preload-20210816223156-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:37:07 +0000 UTC Type:0 Mac:52:54:00:1e:66:08 Iaid: IPaddr:192.168.116.66 Prefix:24 Hostname:no-preload-20210816223156-6986 Clientid:01:52:54:00:1e:66:08}
	I0816 22:42:22.339055   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined IP address 192.168.116.66 and MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.339242   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHPort
	I0816 22:42:22.339393   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHKeyPath
	I0816 22:42:22.339570   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHUsername
	I0816 22:42:22.339731   18923 sshutil.go:53] new ssh client: &{IP:192.168.116.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa Username:docker}
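The four sshutil.go lines above each build an SSH client from the driver's metadata: IP 192.168.116.66, port 22, user "docker", and the profile's id_rsa key. A manual equivalent, assuming the VM is still up and using the key path from the log (sketch only):

    # Connect to the node the same way the sshutil clients above do; all
    # values are copied from the sshutil.go log lines.
    KEY=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa
    ssh -i "$KEY" -p 22 docker@192.168.116.66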
	I0816 22:42:22.671302   18923 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:42:22.671331   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:42:22.674471   18923 node_ready.go:35] waiting up to 6m0s for node "no-preload-20210816223156-6986" to be "Ready" ...
	I0816 22:42:22.674764   18923 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.116.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:42:22.680985   18923 node_ready.go:49] node "no-preload-20210816223156-6986" has status "Ready":"True"
	I0816 22:42:22.681006   18923 node_ready.go:38] duration metric: took 6.219914ms waiting for node "no-preload-20210816223156-6986" to be "Ready" ...
	I0816 22:42:22.681017   18923 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:42:22.690584   18923 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace to be "Ready" ...
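node_ready and pod_ready above poll the Kubernetes API for Ready conditions: first the node itself (Ready after 6.2ms here), then each system-critical pod in turn. The same checks done by hand, assuming a kubeconfig pointed at this cluster (sketch):

    # Node Ready condition, as node_ready.go polls it:
    kubectl get node no-preload-20210816223156-6986 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # Ready condition of one of the system-critical pods being waited on:
    kubectl -n kube-system get pod coredns-78fcd69978-9rlk6 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'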
	I0816 22:42:22.758871   18923 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:42:22.908102   18923 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:42:22.908132   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:42:23.011738   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:42:23.011768   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:42:23.048103   18923 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:42:23.113442   18923 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:42:23.113472   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:42:23.311431   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:42:23.311461   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:42:23.413450   18923 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:42:23.601523   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:42:23.601554   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:42:23.797882   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:42:23.797908   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:42:23.957080   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:42:23.957109   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:42:24.496102   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:42:24.496134   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:42:24.715720   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:42:24.715807   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:42:24.725833   18923 pod_ready.go:102] pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:24.991135   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:42:24.991165   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:42:25.061259   18923 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.116.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.386242884s)
	I0816 22:42:25.061297   18923 start.go:728] {"host.minikube.internal": 192.168.116.1} host record injected into CoreDNS
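The 2.38s command that just completed is minikube's host-record injection: it dumps the coredns ConfigMap, splices a hosts{} block ahead of the forward plugin with sed, and replaces the ConfigMap so host.minikube.internal resolves to the libvirt gateway (192.168.116.1 in this run). Stripped of the sudo and --kubeconfig wrapping shown in the log, the pipeline is:

    # Same edit as the logged command, run with an ordinary kubeconfig.
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.116.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl replace -f -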
	I0816 22:42:25.085411   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:42:25.085463   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:42:25.132722   18923 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:42:25.402705   18923 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.64379015s)
	I0816 22:42:25.402772   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.402790   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.403123   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.403222   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:25.403245   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.403270   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.403197   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:25.403597   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:25.404574   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.404594   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:25.404607   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.404616   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.404837   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.404878   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:25.431424   18923 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.383276848s)
	I0816 22:42:25.431470   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.431484   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.431767   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.431781   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:25.431788   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:25.431799   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.431810   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.432092   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.432111   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:22.168138   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:24.174050   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:26.094382   18923 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.680878058s)
	I0816 22:42:26.094446   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:26.094474   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:26.094773   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:26.094830   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:26.094859   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:26.094885   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:26.094774   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:26.095167   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:26.095182   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:26.095193   18923 addons.go:313] Verifying addon metrics-server=true in "no-preload-20210816223156-6986"
	I0816 22:42:26.855647   18923 pod_ready.go:102] pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:27.149522   18923 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.016735128s)
	I0816 22:42:27.149590   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:27.149605   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:27.149955   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:27.150053   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:27.150073   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:27.150083   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:27.150094   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:27.150330   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:27.150347   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:26.575022   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:28.575534   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:27.153345   18923 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0816 22:42:27.153375   18923 addons.go:344] enableAddons completed in 4.88997344s
	I0816 22:42:28.729990   18923 pod_ready.go:92] pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:28.730033   18923 pod_ready.go:81] duration metric: took 6.039413295s waiting for pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:28.730047   18923 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-vf8l9" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.743600   18923 pod_ready.go:97] error getting pod "coredns-78fcd69978-vf8l9" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-vf8l9" not found
	I0816 22:42:30.743642   18923 pod_ready.go:81] duration metric: took 2.013586217s waiting for pod "coredns-78fcd69978-vf8l9" in "kube-system" namespace to be "Ready" ...
	E0816 22:42:30.743656   18923 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-78fcd69978-vf8l9" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-vf8l9" not found
	I0816 22:42:30.743666   18923 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.757721   18923 pod_ready.go:92] pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.757745   18923 pod_ready.go:81] duration metric: took 14.064042ms waiting for pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.757758   18923 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.767053   18923 pod_ready.go:92] pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.767087   18923 pod_ready.go:81] duration metric: took 9.317684ms waiting for pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.767102   18923 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.777595   18923 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.777619   18923 pod_ready.go:81] duration metric: took 10.507966ms waiting for pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.777632   18923 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jhqbx" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.790967   18923 pod_ready.go:92] pod "kube-proxy-jhqbx" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.790991   18923 pod_ready.go:81] duration metric: took 13.350231ms waiting for pod "kube-proxy-jhqbx" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.791003   18923 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:26.174733   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:28.675892   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:30.951607   18923 pod_ready.go:92] pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.951630   18923 pod_ready.go:81] duration metric: took 160.617881ms waiting for pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.951642   18923 pod_ready.go:38] duration metric: took 8.270610362s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:42:30.951663   18923 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:42:30.951723   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:42:30.970609   18923 api_server.go:70] duration metric: took 8.707242252s to wait for apiserver process to appear ...
	I0816 22:42:30.970637   18923 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:42:30.970650   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:42:30.979459   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 200:
	ok
	I0816 22:42:30.980742   18923 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:42:30.980766   18923 api_server.go:129] duration metric: took 10.122149ms to wait for apiserver health ...
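api_server.go above waits in two stages: pgrep for a running kube-apiserver process, then polling /healthz until it answers 200 with body "ok". Reproduced by hand against this run's endpoint (sketch; -k skips TLS verification for brevity, pass the cluster CA with --cacert to verify instead):

    # Stage 1: is the apiserver process up? (same pattern the log runs via SSH)
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # Stage 2: poll the healthz endpoint checked at 22:42:30 above.
    curl -k https://192.168.116.66:8443/healthz    # expect: ok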
	I0816 22:42:30.980777   18923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:42:31.156911   18923 system_pods.go:59] 8 kube-system pods found
	I0816 22:42:31.156942   18923 system_pods.go:61] "coredns-78fcd69978-9rlk6" [83d3b042-5692-4c44-b6f2-65020120666e] Running
	I0816 22:42:31.156949   18923 system_pods.go:61] "etcd-no-preload-20210816223156-6986" [4ea832d0-9ae2-4d9e-9aed-aa581e1ef4cf] Running
	I0816 22:42:31.156956   18923 system_pods.go:61] "kube-apiserver-no-preload-20210816223156-6986" [048331fe-fed3-4cca-a2c6-3b377e26647b] Running
	I0816 22:42:31.156965   18923 system_pods.go:61] "kube-controller-manager-no-preload-20210816223156-6986" [704424de-e6c8-4c70-bd80-c9b68c4c4b57] Running
	I0816 22:42:31.156971   18923 system_pods.go:61] "kube-proxy-jhqbx" [edacd358-46da-4db4-a8db-098f6edefb76] Running
	I0816 22:42:31.156977   18923 system_pods.go:61] "kube-scheduler-no-preload-20210816223156-6986" [7674933f-6de0-4090-867c-c082293d963c] Running
	I0816 22:42:31.156988   18923 system_pods.go:61] "metrics-server-7c784ccb57-dfjww" [7a744b20-6d7f-4001-a322-7e5615cbf15f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:42:31.156998   18923 system_pods.go:61] "storage-provisioner" [c8a5ee3a-f5df-4eeb-92b1-443ecef04ea2] Running
	I0816 22:42:31.157005   18923 system_pods.go:74] duration metric: took 176.222595ms to wait for pod list to return data ...
	I0816 22:42:31.157016   18923 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:42:31.345286   18923 default_sa.go:45] found service account: "default"
	I0816 22:42:31.345311   18923 default_sa.go:55] duration metric: took 188.289571ms for default service account to be created ...
	I0816 22:42:31.345319   18923 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:42:31.555450   18923 system_pods.go:86] 8 kube-system pods found
	I0816 22:42:31.555481   18923 system_pods.go:89] "coredns-78fcd69978-9rlk6" [83d3b042-5692-4c44-b6f2-65020120666e] Running
	I0816 22:42:31.555490   18923 system_pods.go:89] "etcd-no-preload-20210816223156-6986" [4ea832d0-9ae2-4d9e-9aed-aa581e1ef4cf] Running
	I0816 22:42:31.555497   18923 system_pods.go:89] "kube-apiserver-no-preload-20210816223156-6986" [048331fe-fed3-4cca-a2c6-3b377e26647b] Running
	I0816 22:42:31.555503   18923 system_pods.go:89] "kube-controller-manager-no-preload-20210816223156-6986" [704424de-e6c8-4c70-bd80-c9b68c4c4b57] Running
	I0816 22:42:31.555509   18923 system_pods.go:89] "kube-proxy-jhqbx" [edacd358-46da-4db4-a8db-098f6edefb76] Running
	I0816 22:42:31.555515   18923 system_pods.go:89] "kube-scheduler-no-preload-20210816223156-6986" [7674933f-6de0-4090-867c-c082293d963c] Running
	I0816 22:42:31.555529   18923 system_pods.go:89] "metrics-server-7c784ccb57-dfjww" [7a744b20-6d7f-4001-a322-7e5615cbf15f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:42:31.555541   18923 system_pods.go:89] "storage-provisioner" [c8a5ee3a-f5df-4eeb-92b1-443ecef04ea2] Running
	I0816 22:42:31.555553   18923 system_pods.go:126] duration metric: took 210.228822ms to wait for k8s-apps to be running ...
	I0816 22:42:31.555566   18923 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:42:31.555615   18923 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:42:31.581892   18923 system_svc.go:56] duration metric: took 26.318542ms WaitForService to wait for kubelet.
	I0816 22:42:31.581920   18923 kubeadm.go:547] duration metric: took 9.318562144s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:42:31.581949   18923 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:42:31.744656   18923 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:42:31.744683   18923 node_conditions.go:123] node cpu capacity is 2
	I0816 22:42:31.744699   18923 node_conditions.go:105] duration metric: took 162.745304ms to run NodePressure ...
	I0816 22:42:31.744708   18923 start.go:231] waiting for startup goroutines ...
	I0816 22:42:31.799332   18923 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0816 22:42:31.801873   18923 out.go:177] 
	W0816 22:42:31.802045   18923 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0816 22:42:31.803807   18923 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0816 22:42:31.805603   18923 out.go:177] * Done! kubectl is now configured to use "no-preload-20210816223156-6986" cluster and "default" namespace by default
	I0816 22:42:34.356504   18635 out.go:204]   - Configuring RBAC rules ...
	I0816 22:42:34.810198   18635 cni.go:93] Creating CNI manager for ""
	I0816 22:42:34.810227   18635 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:42:30.576523   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:33.074048   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:35.075110   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:31.178766   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:33.673945   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:35.674516   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:34.812149   18635 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:42:34.812218   18635 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:42:34.823097   18635 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
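The 457-byte scp above installs the bridge CNI config announced by the "Configuring bridge CNI" step. The payload itself is not in the log, so the following is a hypothetical illustration of the shape such a conflist takes, not the file from this run (name, version, and subnet are placeholders):

    # Hypothetical example; the real 1-k8s.conflist contents are not logged.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF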
	I0816 22:42:34.840052   18635 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:42:34.840175   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=old-k8s-version-20210816223154-6986 minikube.k8s.io/updated_at=2021_08_16T22_42_34_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:34.840179   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:35.279911   18635 ops.go:34] apiserver oom_adj: 16
	I0816 22:42:35.279930   18635 ops.go:39] adjusting apiserver oom_adj to -10
	I0816 22:42:35.279944   18635 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
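ops.go above reads the apiserver's OOM score adjustment (16 on this VM) and lowers it to -10 so the kernel's OOM killer is less likely to target the apiserver under memory pressure. Both commands appear verbatim in the log; as a standalone sketch:

    # Read the current oom_adj of the kube-apiserver process (16 here) ...
    cat /proc/$(pgrep kube-apiserver)/oom_adj
    # ... then lower it to -10 to shield the apiserver from the OOM killer.
    echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj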
	I0816 22:42:35.279997   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:35.887807   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:36.388228   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:36.888072   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:37.388131   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:37.888197   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:37.075407   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:39.574205   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:38.169080   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:40.669388   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:38.388192   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:38.887529   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:39.387314   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:39.887397   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:40.388222   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:40.887817   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:41.388165   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:41.887336   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:42.387710   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:42.887452   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:41.575892   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:44.074399   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:43.168677   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:45.674667   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:43.388233   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:43.888191   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:44.388190   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:44.888073   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:45.387300   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:45.887633   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:46.388266   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:46.887918   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:47.387283   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:47.887770   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:46.074552   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:48.573015   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:48.387776   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:48.888189   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:49.388262   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:49.887594   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:50.137803   18635 kubeadm.go:985] duration metric: took 15.297678668s to wait for elevateKubeSystemPrivileges.
	I0816 22:42:50.137838   18635 kubeadm.go:392] StartCluster complete in 5m52.622280434s
	I0816 22:42:50.137865   18635 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:42:50.137996   18635 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:42:50.140032   18635 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:42:50.769953   18635 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20210816223154-6986" rescaled to 1
	I0816 22:42:50.770028   18635 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.94.246 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0816 22:42:50.771768   18635 out.go:177] * Verifying Kubernetes components...
	I0816 22:42:50.771833   18635 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:42:50.770075   18635 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:42:50.770097   18635 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:42:50.770295   18635 config.go:177] Loaded profile config "old-k8s-version-20210816223154-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
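enableAddons starts here with the same four addons the no-preload run finished with: dashboard, metrics-server, storage-provisioner, and default-storageclass. Outside the integration test, the equivalent request goes through the minikube CLI (sketch; profile name taken from the log):

    # Illustrative manual equivalent for the two non-default addons:
    minikube -p old-k8s-version-20210816223154-6986 addons enable metrics-server
    minikube -p old-k8s-version-20210816223154-6986 addons enable dashboard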
	I0816 22:42:50.771981   18635 addons.go:59] Setting default-storageclass=true in profile "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.771981   18635 addons.go:59] Setting storage-provisioner=true in profile "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.771999   18635 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.772004   18635 addons.go:135] Setting addon storage-provisioner=true in "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.771995   18635 addons.go:59] Setting dashboard=true in profile "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.772027   18635 addons.go:135] Setting addon dashboard=true in "old-k8s-version-20210816223154-6986"
	W0816 22:42:50.772039   18635 addons.go:147] addon dashboard should already be in state true
	I0816 22:42:50.772074   18635 host.go:66] Checking if "old-k8s-version-20210816223154-6986" exists ...
	I0816 22:42:50.771981   18635 addons.go:59] Setting metrics-server=true in profile "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.772106   18635 addons.go:135] Setting addon metrics-server=true in "old-k8s-version-20210816223154-6986"
	W0816 22:42:50.772118   18635 addons.go:147] addon metrics-server should already be in state true
	I0816 22:42:50.772143   18635 host.go:66] Checking if "old-k8s-version-20210816223154-6986" exists ...
	W0816 22:42:50.772012   18635 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:42:50.772202   18635 host.go:66] Checking if "old-k8s-version-20210816223154-6986" exists ...
	I0816 22:42:50.772450   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.772491   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.772514   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.772550   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.772562   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.772590   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.772850   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.772907   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.786384   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43919
	I0816 22:42:50.786896   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.787436   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.787463   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.787854   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.788085   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.788330   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36229
	I0816 22:42:50.788749   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.789268   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.789290   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.789622   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.790176   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.790222   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.795830   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42399
	I0816 22:42:50.795865   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0816 22:42:50.796347   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.796355   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.796868   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.796888   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.796872   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.796936   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.797257   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.797329   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.797807   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.797848   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.797871   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.797906   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.799195   18635 addons.go:135] Setting addon default-storageclass=true in "old-k8s-version-20210816223154-6986"
	W0816 22:42:50.799218   18635 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:42:50.799243   18635 host.go:66] Checking if "old-k8s-version-20210816223154-6986" exists ...
	I0816 22:42:50.799640   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.799681   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.810531   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40999
	I0816 22:42:50.811204   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.811785   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.811802   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.812347   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.812540   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.815618   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44099
	I0816 22:42:50.815827   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34305
	I0816 22:42:50.816141   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.816227   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.816697   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.816714   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.816835   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.816854   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.817100   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.817172   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.817189   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .DriverName
	I0816 22:42:50.817352   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.819885   18635 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:42:50.817704   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.820954   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .DriverName
	I0816 22:42:50.821662   18635 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:42:50.821713   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.821719   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:42:50.821731   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:42:50.821750   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHHostname
	I0816 22:42:50.823437   18635 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:42:50.822272   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33579
	I0816 22:42:50.823493   18635 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:42:50.823505   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:42:50.823522   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHHostname
	I0816 22:42:50.823823   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.824293   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.824311   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.824702   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.824895   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.828911   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.828954   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .DriverName
	I0816 22:42:47.677798   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:50.171236   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:50.830871   18635 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:42:50.830990   18635 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:42:50.831003   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:42:50.831019   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHHostname
	I0816 22:42:50.829748   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:23:71", ip: ""} in network mk-old-k8s-version-20210816223154-6986: {Iface:virbr6 ExpiryTime:2021-08-16 23:36:29 +0000 UTC Type:0 Mac:52:54:00:bf:23:71 Iaid: IPaddr:192.168.94.246 Prefix:24 Hostname:old-k8s-version-20210816223154-6986 Clientid:01:52:54:00:bf:23:71}
	I0816 22:42:50.831084   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined IP address 192.168.94.246 and MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.829926   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHPort
	I0816 22:42:50.830586   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.831142   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:23:71", ip: ""} in network mk-old-k8s-version-20210816223154-6986: {Iface:virbr6 ExpiryTime:2021-08-16 23:36:29 +0000 UTC Type:0 Mac:52:54:00:bf:23:71 Iaid: IPaddr:192.168.94.246 Prefix:24 Hostname:old-k8s-version-20210816223154-6986 Clientid:01:52:54:00:bf:23:71}
	I0816 22:42:50.831171   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined IP address 192.168.94.246 and MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.831303   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHKeyPath
	I0816 22:42:50.831452   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHUsername
	I0816 22:42:50.831626   18635 sshutil.go:53] new ssh client: &{IP:192.168.94.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa Username:docker}
	I0816 22:42:50.831935   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHPort
	I0816 22:42:50.832101   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHKeyPath
	I0816 22:42:50.832284   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHUsername
	I0816 22:42:50.832496   18635 sshutil.go:53] new ssh client: &{IP:192.168.94.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa Username:docker}
	I0816 22:42:50.835565   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34581
	I0816 22:42:50.836045   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.836624   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.836646   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.836952   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.837022   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.837210   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.837385   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:23:71", ip: ""} in network mk-old-k8s-version-20210816223154-6986: {Iface:virbr6 ExpiryTime:2021-08-16 23:36:29 +0000 UTC Type:0 Mac:52:54:00:bf:23:71 Iaid: IPaddr:192.168.94.246 Prefix:24 Hostname:old-k8s-version-20210816223154-6986 Clientid:01:52:54:00:bf:23:71}
	I0816 22:42:50.837420   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined IP address 192.168.94.246 and MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.837596   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHPort
	I0816 22:42:50.837797   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHKeyPath
	I0816 22:42:50.837973   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHUsername
	I0816 22:42:50.838150   18635 sshutil.go:53] new ssh client: &{IP:192.168.94.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa Username:docker}
	I0816 22:42:50.839968   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .DriverName
	I0816 22:42:50.840224   18635 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:42:50.840241   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:42:50.840256   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHHostname
	I0816 22:42:50.846248   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.846622   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:23:71", ip: ""} in network mk-old-k8s-version-20210816223154-6986: {Iface:virbr6 ExpiryTime:2021-08-16 23:36:29 +0000 UTC Type:0 Mac:52:54:00:bf:23:71 Iaid: IPaddr:192.168.94.246 Prefix:24 Hostname:old-k8s-version-20210816223154-6986 Clientid:01:52:54:00:bf:23:71}
	I0816 22:42:50.846648   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined IP address 192.168.94.246 and MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.846901   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHPort
	I0816 22:42:50.847072   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHKeyPath
	I0816 22:42:50.847256   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHUsername
	I0816 22:42:50.847384   18635 sshutil.go:53] new ssh client: &{IP:192.168.94.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa Username:docker}
	I0816 22:42:51.069324   18635 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20210816223154-6986" to be "Ready" ...
	I0816 22:42:51.069363   18635 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:42:51.074198   18635 node_ready.go:49] node "old-k8s-version-20210816223154-6986" has status "Ready":"True"
	I0816 22:42:51.074219   18635 node_ready.go:38] duration metric: took 4.853226ms waiting for node "old-k8s-version-20210816223154-6986" to be "Ready" ...
	I0816 22:42:51.074228   18635 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:42:51.079427   18635 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:51.095977   18635 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:42:51.095994   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:42:51.114667   18635 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:42:51.127402   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:42:51.127423   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:42:51.139080   18635 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:42:51.142203   18635 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:42:51.142227   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:42:51.184024   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:42:51.184049   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:42:51.229690   18635 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:42:51.229719   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:42:51.258163   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:42:51.258186   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:42:51.292848   18635 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:42:51.348950   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:42:51.348979   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:42:51.432982   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:42:51.433017   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:42:51.500730   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:42:51.500762   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:42:51.566104   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:42:51.566132   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:42:51.669547   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:42:51.669569   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:42:51.755011   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:42:51.755042   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:42:51.807684   18635 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:42:52.571594   18635 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.502197835s)
	I0816 22:42:52.571636   18635 start.go:728] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
	I0816 22:42:52.759651   18635 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.644944376s)
	I0816 22:42:52.759687   18635 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.620572399s)
	I0816 22:42:52.759727   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.759743   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.759751   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.759765   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.760012   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.760058   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:52.760071   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.760080   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.760115   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.760131   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:52.760156   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.760170   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.761684   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.761690   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | Closing plugin on server side
	I0816 22:42:52.761704   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:52.761719   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | Closing plugin on server side
	I0816 22:42:52.761689   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.761794   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:52.761806   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.761817   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.762085   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.762108   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:53.390381   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:53.699731   18635 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.406829667s)
	I0816 22:42:53.699820   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:53.699836   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:53.700202   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:53.700222   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:53.700238   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:53.700249   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:53.700503   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:53.700523   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:53.700538   18635 addons.go:313] Verifying addon metrics-server=true in "old-k8s-version-20210816223154-6986"
	I0816 22:42:54.131359   18635 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.323617191s)
	I0816 22:42:54.131419   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:54.131434   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:54.131720   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | Closing plugin on server side
	I0816 22:42:54.131759   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:54.131767   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:54.131782   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:54.131793   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:54.132029   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:54.132048   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:50.574063   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:53.075372   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:52.670047   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:54.673975   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:54.134079   18635 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0816 22:42:54.134104   18635 addons.go:344] enableAddons completed in 3.364015112s
	I0816 22:42:55.589126   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:57.594328   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
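
The node_ready.go/pod_ready.go lines above poll the apiserver (up to 6m0s) until the node and each system-critical pod report Ready. A minimal client-go sketch of that polling pattern follows; it is illustrative only, not minikube's actual pod_ready.go, and the kubeconfig path and pod name are taken from the log as placeholders.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; minikube writes its own under the profile dir.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 2s, up to 6m0s, matching the timeout in the log above.
	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-fb8b8dccf-r87qj", metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		// A pod is "Ready" when its PodReady condition is True.
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	fmt.Println("pod ready:", err == nil)
}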
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	f408bf522922a       523cad1a4df73       22 seconds ago       Exited              dashboard-metrics-scraper   1                   cd35b65dbf056
	64955f6935975       9a07b5b4bfac0       29 seconds ago       Running             kubernetes-dashboard        0                   53b71c03b338b
	89a458187aec6       6e38f40d628db       31 seconds ago       Exited              storage-provisioner         0                   48a5117f3b82f
	e0bb5a872be5d       8d147537fb7d1       35 seconds ago       Running             coredns                     0                   d3e8dbb1065a0
	4cbc69479c98f       ea6b13ed84e03       37 seconds ago       Running             kube-proxy                  0                   a2d8648503867
	1e3071c5b1c50       0048118155842       About a minute ago   Running             etcd                        2                   fdbc7e8443532
	cb12185bdacc3       7da2efaa5b480       About a minute ago   Running             kube-scheduler              2                   10155044a33d1
	a5a4025cb2615       cf9cba6c3e4a8       About a minute ago   Running             kube-controller-manager     2                   bbeb739aeb370
	f7e4e0952db6f       b2462aa94d403       About a minute ago   Running             kube-apiserver              2                   a5ef942d9974f
	f45ec9b98801b       56cc512116c8f       5 minutes ago        Exited              busybox                     1                   6f71853f758e6
	c265ff52803b4       8d147537fb7d1       5 minutes ago        Exited              coredns                     1                   6f468f8c94515
	
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2021-08-16 22:37:06 UTC, end at Mon 2021-08-16 22:43:00 UTC. --
	Aug 16 22:42:37 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:37.130703264Z" level=info msg="StartContainer for \"6e961ef114c53f4caadc1d127a23aeb672481fd3e699e195475ad5da9efadb60\" returns successfully"
	Aug 16 22:42:37 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:37.177108643Z" level=info msg="Finish piping stderr of container \"6e961ef114c53f4caadc1d127a23aeb672481fd3e699e195475ad5da9efadb60\""
	Aug 16 22:42:37 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:37.177347709Z" level=info msg="Finish piping stdout of container \"6e961ef114c53f4caadc1d127a23aeb672481fd3e699e195475ad5da9efadb60\""
	Aug 16 22:42:37 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:37.180213413Z" level=info msg="TaskExit event &TaskExit{ContainerID:6e961ef114c53f4caadc1d127a23aeb672481fd3e699e195475ad5da9efadb60,ID:6e961ef114c53f4caadc1d127a23aeb672481fd3e699e195475ad5da9efadb60,Pid:6288,ExitStatus:1,ExitedAt:2021-08-16 22:42:37.179527212 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:42:37 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:37.241473350Z" level=info msg="shim disconnected" id=6e961ef114c53f4caadc1d127a23aeb672481fd3e699e195475ad5da9efadb60
	Aug 16 22:42:37 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:37.241863109Z" level=error msg="copy shim log" error="read /proc/self/fd/128: file already closed"
	Aug 16 22:42:37 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:37.563129823Z" level=info msg="CreateContainer within sandbox \"cd35b65dbf056e2a246c7f0327763ece3bf3d994f02dd6b9afd6e03f3ea48b74\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 16 22:42:37 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:37.638635394Z" level=info msg="CreateContainer within sandbox \"cd35b65dbf056e2a246c7f0327763ece3bf3d994f02dd6b9afd6e03f3ea48b74\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2\""
	Aug 16 22:42:37 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:37.647762432Z" level=info msg="StartContainer for \"f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2\""
	Aug 16 22:42:38 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:38.031309589Z" level=info msg="StartContainer for \"f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2\" returns successfully"
	Aug 16 22:42:38 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:38.069992253Z" level=info msg="Finish piping stderr of container \"f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2\""
	Aug 16 22:42:38 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:38.070481010Z" level=info msg="Finish piping stdout of container \"f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2\""
	Aug 16 22:42:38 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:38.073080983Z" level=info msg="TaskExit event &TaskExit{ContainerID:f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2,ID:f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2,Pid:6358,ExitStatus:1,ExitedAt:2021-08-16 22:42:38.072430716 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:42:38 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:38.129822243Z" level=info msg="shim disconnected" id=f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2
	Aug 16 22:42:38 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:38.129963592Z" level=error msg="copy shim log" error="read /proc/self/fd/128: file already closed"
	Aug 16 22:42:38 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:38.574733472Z" level=info msg="RemoveContainer for \"6e961ef114c53f4caadc1d127a23aeb672481fd3e699e195475ad5da9efadb60\""
	Aug 16 22:42:38 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:38.585802037Z" level=info msg="RemoveContainer for \"6e961ef114c53f4caadc1d127a23aeb672481fd3e699e195475ad5da9efadb60\" returns successfully"
	Aug 16 22:42:39 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:39.084909344Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:42:39 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:39.089354034Z" level=info msg="trying next host" error="failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" host=fake.domain
	Aug 16 22:42:39 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:39.094999774Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Aug 16 22:42:55 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:55.144867032Z" level=info msg="TaskExit event &TaskExit{ContainerID:89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2,ID:89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2,Pid:6023,ExitStatus:255,ExitedAt:2021-08-16 22:42:55.144233318 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:42:55 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:55.144894862Z" level=info msg="Finish piping stderr of container \"89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2\""
	Aug 16 22:42:55 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:55.145828833Z" level=info msg="Finish piping stdout of container \"89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2\""
	Aug 16 22:42:55 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:55.211075164Z" level=info msg="shim disconnected" id=89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2
	Aug 16 22:42:55 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:55.211392176Z" level=error msg="copy shim log" error="read /proc/self/fd/122: file already closed"
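
The PullImage failure above is expected: the metrics-server addon under test deliberately points at fake.domain, which never resolves, so the pull fails in the registry resolver. For reference, the same pull path exercised through containerd's Go client; this is a sketch assuming the default socket path and the "k8s.io" namespace that CRI-managed images use.

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd instance kubelet talks to.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// A ref on an unresolvable host fails during resolution, exactly like
	// the fake.domain DNS lookup error in the log above.
	img, err := client.Pull(ctx, "fake.domain/k8s.gcr.io/echoserver:1.4",
		containerd.WithPullUnpack)
	if err != nil {
		fmt.Println("pull failed:", err)
		return
	}
	fmt.Println("pulled:", img.Name())
}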
	
	* 
	* ==> coredns [c265ff52803b42d9b0ff6036d0ca9fe4b655c9875474baa9a0a3f7255d230e17] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = ef4aca0642b1bd212f9628ab01cc3780
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	* 
	* ==> coredns [e0bb5a872be5dee5a28ce3431d7728ab3b6d152dacd42c354ec65b03076e30d0] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
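
The "[INFO] Reloading" above lines up with the Corefile replace at 22:42:51-52 in the stderr log, where minikube pipes the coredns ConfigMap through sed to insert a hosts block mapping host.minikube.internal to the gateway IP. A hedged client-go sketch of the same edit; minikube itself does it over SSH with kubectl | sed as shown earlier, and injectHostRecord is a hypothetical helper.

package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// injectHostRecord inserts a hosts{} block ahead of the forward directive,
// mirroring the sed expression in the log above.
func injectHostRecord(client *kubernetes.Clientset, gatewayIP string) error {
	cms := client.CoreV1().ConfigMaps("kube-system")
	cm, err := cms.Get(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	hosts := "        hosts {\n           " + gatewayIP +
		" host.minikube.internal\n           fallthrough\n        }\n"
	cm.Data["Corefile"] = strings.Replace(
		cm.Data["Corefile"], "        forward .", hosts+"        forward .", 1)
	_, err = cms.Update(context.TODO(), cm, metav1.UpdateOptions{})
	return err
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	// Gateway IP as reported by start.go:728 above.
	if err := injectHostRecord(kubernetes.NewForConfigOrDie(cfg), "192.168.94.1"); err != nil {
		panic(err)
	}
}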
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +3.433315] systemd-fstab-generator[1162]: Ignoring "noauto" for root device
	[  +0.033608] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.953752] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1722 comm=systemd-network
	[  +0.655674] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.304031] vboxguest: loading out-of-tree module taints kernel.
	[  +0.007344] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.886202] systemd-fstab-generator[2057]: Ignoring "noauto" for root device
	[  +0.150635] systemd-fstab-generator[2070]: Ignoring "noauto" for root device
	[  +0.197543] systemd-fstab-generator[2100]: Ignoring "noauto" for root device
	[  +6.271369] systemd-fstab-generator[2299]: Ignoring "noauto" for root device
	[ +14.991038] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.035081] kauditd_printk_skb: 98 callbacks suppressed
	[  +9.853332] kauditd_printk_skb: 41 callbacks suppressed
	[Aug16 22:38] kauditd_printk_skb: 2 callbacks suppressed
	[Aug16 22:39] NFSD: Unable to end grace period: -110
	[Aug16 22:41] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.186651] systemd-fstab-generator[4557]: Ignoring "noauto" for root device
	[Aug16 22:42] systemd-fstab-generator[4945]: Ignoring "noauto" for root device
	[ +13.715349] kauditd_printk_skb: 68 callbacks suppressed
	[  +5.284900] kauditd_printk_skb: 47 callbacks suppressed
	[ +10.335714] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.161347] systemd-fstab-generator[6406]: Ignoring "noauto" for root device
	[  +0.769765] systemd-fstab-generator[6462]: Ignoring "noauto" for root device
	[  +0.957588] systemd-fstab-generator[6516]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [1e3071c5b1c506432365e271b588aadbfd2eda919a23dcea85d60022046acfb4] <==
	* {"level":"info","ts":"2021-08-16T22:41:59.934Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"d79850b30a557227","local-member-id":"69ff0102d0d103a7","added-peer-id":"69ff0102d0d103a7","added-peer-peer-urls":["https://192.168.116.66:2380"]}
	{"level":"info","ts":"2021-08-16T22:41:59.934Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"69ff0102d0d103a7","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2021-08-16T22:41:59.947Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-08-16T22:41:59.950Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"69ff0102d0d103a7","initial-advertise-peer-urls":["https://192.168.116.66:2380"],"listen-peer-urls":["https://192.168.116.66:2380"],"advertise-client-urls":["https://192.168.116.66:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.116.66:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-08-16T22:41:59.950Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.116.66:2380"}
	{"level":"info","ts":"2021-08-16T22:41:59.951Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.116.66:2380"}
	{"level":"info","ts":"2021-08-16T22:41:59.950Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-08-16T22:42:00.397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"69ff0102d0d103a7 is starting a new election at term 1"}
	{"level":"info","ts":"2021-08-16T22:42:00.398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"69ff0102d0d103a7 became pre-candidate at term 1"}
	{"level":"info","ts":"2021-08-16T22:42:00.398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"69ff0102d0d103a7 received MsgPreVoteResp from 69ff0102d0d103a7 at term 1"}
	{"level":"info","ts":"2021-08-16T22:42:00.398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"69ff0102d0d103a7 became candidate at term 2"}
	{"level":"info","ts":"2021-08-16T22:42:00.398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"69ff0102d0d103a7 received MsgVoteResp from 69ff0102d0d103a7 at term 2"}
	{"level":"info","ts":"2021-08-16T22:42:00.399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"69ff0102d0d103a7 became leader at term 2"}
	{"level":"info","ts":"2021-08-16T22:42:00.399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 69ff0102d0d103a7 elected leader 69ff0102d0d103a7 at term 2"}
	{"level":"info","ts":"2021-08-16T22:42:00.400Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-16T22:42:00.403Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"d79850b30a557227","local-member-id":"69ff0102d0d103a7","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-16T22:42:00.403Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-16T22:42:00.403Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-16T22:42:00.403Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"69ff0102d0d103a7","local-member-attributes":"{Name:no-preload-20210816223156-6986 ClientURLs:[https://192.168.116.66:2379]}","request-path":"/0/members/69ff0102d0d103a7/attributes","cluster-id":"d79850b30a557227","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-16T22:42:00.404Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-16T22:42:00.406Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.116.66:2379"}
	{"level":"info","ts":"2021-08-16T22:42:00.404Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-16T22:42:00.408Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-08-16T22:42:00.404Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-16T22:42:00.411Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  22:43:11 up 6 min,  0 users,  load average: 1.82, 1.15, 0.53
	Linux no-preload-20210816223156-6986 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [f7e4e0952db6f386f632efe7b9c51f084992c1ef01a488ff933fd58c8c0522f0] <==
	* W0816 22:42:06.621634       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.116.66]
	I0816 22:42:06.624164       1 controller.go:611] quota admission added evaluator for: endpoints
	I0816 22:42:06.639939       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0816 22:42:07.481254       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0816 22:42:07.841293       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 22:42:08.651804       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0816 22:42:08.783499       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0816 22:42:20.985870       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0816 22:42:21.207926       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	W0816 22:42:27.899512       1 handler_proxy.go:104] no RequestInfo found in the context
	E0816 22:42:27.900032       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0816 22:42:27.900400       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0816 22:42:55.114125       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0816 22:42:55.115058       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:42:55.117119       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:42:55.118501       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0816 22:42:55.120525       1 trace.go:205] Trace[86075017]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:eedcda67-5961-494e-a63e-0cfbf728f1d3,client:192.168.116.66,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:42:45.122) (total time: 9997ms):
	Trace[86075017]: [9.997613115s] [9.997613115s] END
	E0816 22:42:55.122211       1 timeout.go:135] post-timeout activity - time-elapsed: 7.883271ms, GET "/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath" result: <nil>
	I0816 22:43:11.746254       1 trace.go:205] Trace[1293409053]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:500,continue: (16-Aug-2021 22:43:00.315) (total time: 11431ms):
	Trace[1293409053]: [11.431176415s] [11.431176415s] END
	E0816 22:43:11.746425       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.Error{e:(*status.Status)(0xc0104143c0)}: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
	I0816 22:43:11.747456       1 trace.go:205] Trace[792271627]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:33cccb9a-b0ab-4655-a83b-ae30d8c842a5,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (16-Aug-2021 22:43:00.314) (total time: 11432ms):
	Trace[792271627]: [11.432434492s] [11.432434492s] END
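
The 11.4s List trace and the etcd keepalive error above also account for the empty "describe nodes" section earlier: the kubectl v1.22.0 call that collects it ran against the same slow apiserver/etcd path. A client-go sketch of bounding such a call with a context deadline instead of hanging; this is illustrative, as kubectl uses its own --request-timeout plumbing.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Give up after 10s rather than waiting out a slow apiserver/etcd path.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		fmt.Println("list nodes failed:", err)
		return
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name)
	}
}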
	
	* 
	* ==> kube-controller-manager [a5a4025cb261585737124bf5b619554e77896d6c93458ee1baf7d4a451c195db] <==
	* I0816 22:42:21.414378       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0816 22:42:21.472035       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0816 22:42:21.796891       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
	I0816 22:42:21.819223       1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-vf8l9"
	I0816 22:42:25.328671       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-7c784ccb57 to 1"
	I0816 22:42:25.419493       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-7c784ccb57-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0816 22:42:25.503482       1 replica_set.go:536] sync "kube-system/metrics-server-7c784ccb57" failed with pods "metrics-server-7c784ccb57-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	I0816 22:42:25.582064       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-dfjww"
	I0816 22:42:25.759031       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0816 22:42:26.271817       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0816 22:42:26.304758       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:42:26.351361       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:26.357469       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	I0816 22:42:26.382694       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:42:26.406104       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:42:26.413110       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:26.414233       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:42:26.453908       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:26.454671       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:42:26.465800       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:26.466791       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:42:26.566058       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-4vvg9"
	I0816 22:42:26.609058       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-d2v4k"
	E0816 22:42:51.034345       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:42:51.526952       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5] <==
	* I0816 22:42:23.478842       1 node.go:172] Successfully retrieved node IP: 192.168.116.66
	I0816 22:42:23.479012       1 server_others.go:140] Detected node IP 192.168.116.66
	W0816 22:42:23.479041       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	W0816 22:42:23.557097       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0816 22:42:23.567700       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0816 22:42:23.567732       1 server_others.go:212] Using iptables Proxier.
	I0816 22:42:23.592778       1 server.go:649] Version: v1.22.0-rc.0
	I0816 22:42:23.645408       1 config.go:315] Starting service config controller
	I0816 22:42:23.645532       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:42:23.899540       1 config.go:224] Starting endpoint slice config controller
	I0816 22:42:23.958495       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0816 22:42:24.029672       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	E0816 22:42:24.013778       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"no-preload-20210816223156-6986.169beab2be77fb1e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03ed853e67496f1, ext:395359689, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-no-preload-20210816223156-6986", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"no-preload-20210816223156-6986", UID:"no-preload-20210816223156-6986", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "no-preload-20210816223156-6986.169beab2be77fb1e" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0816 22:42:24.046205       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [cb12185bdacc363d50b453691deee789994d7ad81788b6df833ae986a27ce50a] <==
	* E0816 22:42:04.578678       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:42:04.579519       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 22:42:04.579900       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:42:04.580217       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:42:04.581045       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:42:04.581266       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:42:04.581425       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:42:04.582271       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:42:05.389984       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:42:05.425427       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:42:05.503074       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:42:05.503126       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:42:05.612892       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:42:05.633869       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:42:05.891862       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:42:05.917039       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:42:05.931237       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:42:05.936662       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:42:05.954630       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 22:42:05.968275       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:42:05.970200       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:42:06.060908       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:42:07.811804       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0816 22:42:07.811916       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0816 22:42:08.039434       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
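
The forbidden errors above are the scheduler racing RBAC bootstrap: its informers start listing before the system:kube-scheduler bindings are reconciled, and the errors stop once the caches sync (last line). A sketch of checking such a permission explicitly with a SelfSubjectAccessReview; illustrative only, since the scheduler itself simply retries.

package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Ask the apiserver whether the current identity may list pods cluster-wide,
	// the same verb/resource the scheduler was denied above.
	sar := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb: "list", Resource: "pods",
			},
		},
	}
	resp, err := client.AuthorizationV1().SelfSubjectAccessReviews().
		Create(context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", resp.Status.Allowed)
}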
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:37:06 UTC, end at Mon 2021-08-16 22:43:12 UTC. --
	Aug 16 22:42:30 no-preload-20210816223156-6986 kubelet[4954]: W0816 22:42:30.003988    4954 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/bbe6012e-b47a-4f77-a534-7acc694f08ee/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 16 22:42:30 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:30.004221    4954 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe6012e-b47a-4f77-a534-7acc694f08ee-config-volume" (OuterVolumeSpecName: "config-volume") pod "bbe6012e-b47a-4f77-a534-7acc694f08ee" (UID: "bbe6012e-b47a-4f77-a534-7acc694f08ee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 16 22:42:30 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:30.018455    4954 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbe6012e-b47a-4f77-a534-7acc694f08ee-kube-api-access-jvblx" (OuterVolumeSpecName: "kube-api-access-jvblx") pod "bbe6012e-b47a-4f77-a534-7acc694f08ee" (UID: "bbe6012e-b47a-4f77-a534-7acc694f08ee"). InnerVolumeSpecName "kube-api-access-jvblx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 16 22:42:30 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:30.104268    4954 reconciler.go:319] "Volume detached for volume \"kube-api-access-jvblx\" (UniqueName: \"kubernetes.io/projected/bbe6012e-b47a-4f77-a534-7acc694f08ee-kube-api-access-jvblx\") on node \"no-preload-20210816223156-6986\" DevicePath \"\""
	Aug 16 22:42:30 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:30.104633    4954 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbe6012e-b47a-4f77-a534-7acc694f08ee-config-volume\") on node \"no-preload-20210816223156-6986\" DevicePath \"\""
	Aug 16 22:42:30 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:30.366497    4954 scope.go:110] "RemoveContainer" containerID="f8944f430090a618857b0168c95cd98f593e9aa65e598d7470d08e1c976f6ef8"
	Aug 16 22:42:30 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:30.463314    4954 scope.go:110] "RemoveContainer" containerID="f8944f430090a618857b0168c95cd98f593e9aa65e598d7470d08e1c976f6ef8"
	Aug 16 22:42:30 no-preload-20210816223156-6986 kubelet[4954]: E0816 22:42:30.464312    4954 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8944f430090a618857b0168c95cd98f593e9aa65e598d7470d08e1c976f6ef8\": not found" containerID="f8944f430090a618857b0168c95cd98f593e9aa65e598d7470d08e1c976f6ef8"
	Aug 16 22:42:30 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:30.464542    4954 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:f8944f430090a618857b0168c95cd98f593e9aa65e598d7470d08e1c976f6ef8} err="failed to get container status \"f8944f430090a618857b0168c95cd98f593e9aa65e598d7470d08e1c976f6ef8\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8944f430090a618857b0168c95cd98f593e9aa65e598d7470d08e1c976f6ef8\": not found"
	Aug 16 22:42:32 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:32.179007    4954 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=bbe6012e-b47a-4f77-a534-7acc694f08ee path="/var/lib/kubelet/pods/bbe6012e-b47a-4f77-a534-7acc694f08ee/volumes"
	Aug 16 22:42:34 no-preload-20210816223156-6986 kubelet[4954]: E0816 22:42:34.489789    4954 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/podedacd358-46da-4db4-a8db-098f6edefb76\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:42:37 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:37.538425    4954 scope.go:110] "RemoveContainer" containerID="6e961ef114c53f4caadc1d127a23aeb672481fd3e699e195475ad5da9efadb60"
	Aug 16 22:42:38 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:38.547988    4954 scope.go:110] "RemoveContainer" containerID="6e961ef114c53f4caadc1d127a23aeb672481fd3e699e195475ad5da9efadb60"
	Aug 16 22:42:38 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:38.548504    4954 scope.go:110] "RemoveContainer" containerID="f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2"
	Aug 16 22:42:38 no-preload-20210816223156-6986 kubelet[4954]: E0816 22:42:38.561005    4954 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-d2v4k_kubernetes-dashboard(d2a31ab1-304a-4179-9e46-8625b64d8dc4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-d2v4k" podUID=d2a31ab1-304a-4179-9e46-8625b64d8dc4
	Aug 16 22:42:39 no-preload-20210816223156-6986 kubelet[4954]: E0816 22:42:39.095656    4954 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:42:39 no-preload-20210816223156-6986 kubelet[4954]: E0816 22:42:39.095775    4954 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:42:39 no-preload-20210816223156-6986 kubelet[4954]: E0816 22:42:39.095933    4954 kuberuntime_manager.go:895] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-qhpdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-dfjww_kube-system(7a744b20-6d7f-4001-a322-7e5615cbf15f): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:42:39 no-preload-20210816223156-6986 kubelet[4954]: E0816 22:42:39.095982    4954 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-dfjww" podUID=7a744b20-6d7f-4001-a322-7e5615cbf15f
	Aug 16 22:42:39 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:39.553372    4954 scope.go:110] "RemoveContainer" containerID="f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2"
	Aug 16 22:42:39 no-preload-20210816223156-6986 kubelet[4954]: E0816 22:42:39.554186    4954 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-d2v4k_kubernetes-dashboard(d2a31ab1-304a-4179-9e46-8625b64d8dc4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-d2v4k" podUID=d2a31ab1-304a-4179-9e46-8625b64d8dc4
	Aug 16 22:42:42 no-preload-20210816223156-6986 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:42:42 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:42.979034    4954 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 16 22:42:42 no-preload-20210816223156-6986 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:42:42 no-preload-20210816223156-6986 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476] <==
	* 2021/08/16 22:42:30 Starting overwatch
	2021/08/16 22:42:30 Using namespace: kubernetes-dashboard
	2021/08/16 22:42:30 Using in-cluster config to connect to apiserver
	2021/08/16 22:42:30 Using secret token for csrf signing
	2021/08/16 22:42:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/16 22:42:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/16 22:42:30 Successful initial request to the apiserver, version: v1.22.0-rc.0
	2021/08/16 22:42:30 Generating JWE encryption key
	2021/08/16 22:42:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/16 22:42:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/16 22:42:31 Initializing JWE encryption key from synchronized object
	2021/08/16 22:42:31 Creating in-cluster Sidecar client
	2021/08/16 22:42:31 Serving insecurely on HTTP port: 9090
	2021/08/16 22:42:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 89 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc0003e6ad0, 0x0)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc0003e6ac0)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc000416420, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc000372c80, 0x18e5530, 0xc0003e6c80, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0001a87c0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0001a87c0, 0x18b3d60, 0xc0001c0210, 0x1, 0xc000090e40)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0001a87c0, 0x3b9aca00, 0x0, 0x1, 0xc000090e40)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0001a87c0, 0x3b9aca00, 0xc000090e40)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

-- /stdout --
** stderr ** 
	E0816 22:43:11.754815   19774 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
	 output: "\n** stderr ** \nError from server: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210816223156-6986 -n no-preload-20210816223156-6986
E0816 22:43:13.448059    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/flannel-20210816222225-6986/client.crt: no such file or directory
E0816 22:43:20.861895    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816222225-6986/client.crt: no such file or directory
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210816223156-6986 -n no-preload-20210816223156-6986: exit status 2 (16.09406206s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	E0816 22:43:28.406858   19803 status.go:422] Error apiserver status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

** /stderr **
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-20210816223156-6986 logs -n 25

=== CONT  TestStartStop/group/no-preload/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p no-preload-20210816223156-6986 logs -n 25: exit status 110 (1m1.354357014s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| start   | -p flannel-20210816222225-6986                    | flannel-20210816222225-6986                    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:31:06 UTC | Mon, 16 Aug 2021 22:33:13 UTC |
	|         | --memory=2048                                     |                                                |         |         |                               |                               |
	|         | --alsologtostderr                                 |                                                |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                                |         |         |                               |                               |
	|         | --cni=flannel --driver=kvm2                       |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	| ssh     | -p flannel-20210816222225-6986                    | flannel-20210816222225-6986                    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:33:18 UTC | Mon, 16 Aug 2021 22:33:18 UTC |
	|         | pgrep -a kubelet                                  |                                                |         |         |                               |                               |
	| start   | -p bridge-20210816222225-6986                     | bridge-20210816222225-6986                     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:31:52 UTC | Mon, 16 Aug 2021 22:33:19 UTC |
	|         | --memory=2048                                     |                                                |         |         |                               |                               |
	|         | --alsologtostderr                                 |                                                |         |         |                               |                               |
	|         | --wait=true --wait-timeout=5m                     |                                                |         |         |                               |                               |
	|         | --cni=bridge --driver=kvm2                        |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	| ssh     | -p bridge-20210816222225-6986                     | bridge-20210816222225-6986                     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:33:20 UTC | Mon, 16 Aug 2021 22:33:20 UTC |
	|         | pgrep -a kubelet                                  |                                                |         |         |                               |                               |
	| delete  | -p flannel-20210816222225-6986                    | flannel-20210816222225-6986                    | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:33:31 UTC | Mon, 16 Aug 2021 22:33:33 UTC |
	| delete  | -p bridge-20210816222225-6986                     | bridge-20210816222225-6986                     | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:17 UTC | Mon, 16 Aug 2021 22:34:18 UTC |
	| delete  | -p                                                | disable-driver-mounts-20210816223418-6986      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:18 UTC | Mon, 16 Aug 2021 22:34:18 UTC |
	|         | disable-driver-mounts-20210816223418-6986         |                                                |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:31:54 UTC | Mon, 16 Aug 2021 22:34:34 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:44 UTC | Mon, 16 Aug 2021 22:34:45 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816223156-6986                 | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:31:56 UTC | Mon, 16 Aug 2021 22:35:04 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:33:33 UTC | Mon, 16 Aug 2021 22:35:08 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:16 UTC | Mon, 16 Aug 2021 22:35:17 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:19 UTC | Mon, 16 Aug 2021 22:35:20 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:18 UTC | Mon, 16 Aug 2021 22:35:42 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:51 UTC | Mon, 16 Aug 2021 22:35:52 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:45 UTC | Mon, 16 Aug 2021 22:36:18 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:18 UTC | Mon, 16 Aug 2021 22:36:18 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:17 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:20 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:52 UTC | Mon, 16 Aug 2021 22:37:25 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:37:25 UTC | Mon, 16 Aug 2021 22:37:25 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816223156-6986                 | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:42:31 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:42:42 UTC | Mon, 16 Aug 2021 22:42:42 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:37:25
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:37:25.306577   19204 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:37:25.306653   19204 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:37:25.306656   19204 out.go:311] Setting ErrFile to fd 2...
	I0816 22:37:25.306663   19204 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:37:25.307072   19204 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:37:25.307547   19204 out.go:305] Setting JSON to false
	I0816 22:37:25.351342   19204 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":4807,"bootTime":1629148638,"procs":188,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:37:25.351461   19204 start.go:121] virtualization: kvm guest
	I0816 22:37:25.353955   19204 out.go:177] * [default-k8s-different-port-20210816223418-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:37:25.355393   19204 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:37:25.354127   19204 notify.go:169] Checking for updates...
	I0816 22:37:25.356781   19204 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:37:25.358158   19204 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:37:25.364678   19204 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:37:25.365267   19204 config.go:177] Loaded profile config "default-k8s-different-port-20210816223418-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:37:25.365899   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:37:25.365956   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:37:25.381650   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46493
	I0816 22:37:25.382065   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:37:25.382798   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:37:25.382820   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:37:25.383330   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:37:25.383519   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:25.383721   19204 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:37:25.384192   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:37:25.384260   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:37:25.401082   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44899
	I0816 22:37:25.402507   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:37:25.403115   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:37:25.403179   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:37:25.403663   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:37:25.403903   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:25.439751   19204 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 22:37:25.439781   19204 start.go:278] selected driver: kvm2
	I0816 22:37:25.439788   19204 start.go:751] validating driver "kvm2" against &{Name:default-k8s-different-port-20210816223418-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210816223418-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:37:25.439905   19204 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 22:37:25.441282   19204 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:37:25.441453   19204 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 22:37:25.455762   19204 install.go:137] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2 version is 1.22.0
	I0816 22:37:25.456183   19204 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 22:37:25.456219   19204 cni.go:93] Creating CNI manager for ""
	I0816 22:37:25.456234   19204 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:37:25.456245   19204 start_flags.go:277] config:
	{Name:default-k8s-different-port-20210816223418-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210816223418-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:37:25.456384   19204 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:37:25.458420   19204 out.go:177] * Starting control plane node default-k8s-different-port-20210816223418-6986 in cluster default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.458447   19204 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:37:25.458480   19204 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0816 22:37:25.458495   19204 cache.go:56] Caching tarball of preloaded images
	I0816 22:37:25.458602   19204 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 22:37:25.458622   19204 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0816 22:37:25.458779   19204 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/config.json ...
	I0816 22:37:25.459003   19204 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:37:25.459033   19204 start.go:313] acquiring machines lock for default-k8s-different-port-20210816223418-6986: {Name:mk808edd60d1305a42bb85791729eff4573dbb15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 22:37:25.459101   19204 start.go:317] acquired machines lock for "default-k8s-different-port-20210816223418-6986" in 48.071µs
	I0816 22:37:25.459123   19204 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:37:25.459131   19204 fix.go:55] fixHost starting: 
	I0816 22:37:25.459569   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:37:25.459614   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:37:25.473634   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44349
	I0816 22:37:25.474153   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:37:25.474765   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:37:25.474786   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:37:25.475205   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:37:25.475409   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:25.475621   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetState
	I0816 22:37:25.479447   19204 fix.go:108] recreateIfNeeded on default-k8s-different-port-20210816223418-6986: state=Stopped err=<nil>
	I0816 22:37:25.479498   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	W0816 22:37:25.479660   19204 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:37:21.322104   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:21.822129   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:22.321669   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:22.821492   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:23.322452   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:23.822419   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:24.322141   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:24.821615   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:24.856062   18923 api_server.go:70] duration metric: took 8.045517198s to wait for apiserver process to appear ...
	I0816 22:37:24.856091   18923 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:37:24.856103   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:24.856734   18923 api_server.go:255] stopped: https://192.168.116.66:8443/healthz: Get "https://192.168.116.66:8443/healthz": dial tcp 192.168.116.66:8443: connect: connection refused
	I0816 22:37:25.357442   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:22.382628   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Getting to WaitForSSH function...
	I0816 22:37:22.388062   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:22.388472   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:22.388501   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:22.388736   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Using SSH client type: external
	I0816 22:37:22.388774   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa (-rw-------)
	I0816 22:37:22.388825   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.105.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:37:22.388851   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | About to run SSH command:
	I0816 22:37:22.388868   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | exit 0
	I0816 22:37:23.527862   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | SSH cmd err, output: <nil>: 
	I0816 22:37:23.528297   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetConfigRaw
	I0816 22:37:23.529175   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetIP
	I0816 22:37:23.535445   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.535831   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.535862   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.536325   18929 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/config.json ...
	I0816 22:37:23.536603   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:23.536838   18929 machine.go:88] provisioning docker machine ...
	I0816 22:37:23.536860   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:23.537120   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetMachineName
	I0816 22:37:23.537298   18929 buildroot.go:166] provisioning hostname "embed-certs-20210816223333-6986"
	I0816 22:37:23.537328   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetMachineName
	I0816 22:37:23.537497   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:23.543084   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.543520   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.543560   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.543770   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:23.543953   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:23.544122   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:23.544284   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:23.544470   18929 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:23.544676   18929 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.105.129 22 <nil> <nil>}
	I0816 22:37:23.544698   18929 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20210816223333-6986 && echo "embed-certs-20210816223333-6986" | sudo tee /etc/hostname
	I0816 22:37:23.682935   18929 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20210816223333-6986
	
	I0816 22:37:23.682982   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:23.689555   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.690034   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.690071   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.690297   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:23.690526   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:23.690738   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:23.690910   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:23.691116   18929 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:23.691321   18929 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.105.129 22 <nil> <nil>}
	I0816 22:37:23.691351   18929 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20210816223333-6986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210816223333-6986/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20210816223333-6986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:37:23.826330   18929 main.go:130] libmachine: SSH cmd err, output: <nil>: 
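
The shell snippet logged above is minikube's idempotent /etc/hosts update: it rewrites the 127.0.1.1 entry only when the new hostname is not already present, and appends one otherwise. As a minimal sketch (Go, not minikube's actual source), the same command string can be generated from the hostname alone:

package main

import "fmt"

// setHostsCmd renders the check-then-edit shell fragment seen in the
// log: grep for the hostname, rewrite the 127.0.1.1 line if one exists,
// otherwise append a new entry.
func setHostsCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(setHostsCmd("embed-certs-20210816223333-6986"))
}
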
	I0816 22:37:23.826357   18929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:37:23.826393   18929 buildroot.go:174] setting up certificates
	I0816 22:37:23.826403   18929 provision.go:83] configureAuth start
	I0816 22:37:23.826415   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetMachineName
	I0816 22:37:23.826673   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetIP
	I0816 22:37:23.832833   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.833221   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.833252   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.833505   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:23.839058   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.839437   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.839468   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.839721   18929 provision.go:138] copyHostCerts
	I0816 22:37:23.839785   18929 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:37:23.839801   18929 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:37:23.839858   18929 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:37:23.840010   18929 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:37:23.840023   18929 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:37:23.840050   18929 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:37:23.840148   18929 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:37:23.840160   18929 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:37:23.840181   18929 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
	I0816 22:37:23.840251   18929 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20210816223333-6986 san=[192.168.105.129 192.168.105.129 localhost 127.0.0.1 minikube embed-certs-20210816223333-6986]
	I0816 22:37:24.071276   18929 provision.go:172] copyRemoteCerts
	I0816 22:37:24.071347   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:37:24.071383   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.077584   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.078065   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.078133   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.078307   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.078500   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.078636   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.078743   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:37:24.168996   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:37:24.190581   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0816 22:37:24.211894   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:37:24.234970   18929 provision.go:86] duration metric: configureAuth took 408.533613ms
	I0816 22:37:24.235001   18929 buildroot.go:189] setting minikube options for container-runtime
	I0816 22:37:24.235282   18929 config.go:177] Loaded profile config "embed-certs-20210816223333-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:37:24.235303   18929 machine.go:91] provisioned docker machine in 698.450664ms
	I0816 22:37:24.235313   18929 start.go:267] post-start starting for "embed-certs-20210816223333-6986" (driver="kvm2")
	I0816 22:37:24.235321   18929 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:37:24.235352   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.235711   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:37:24.235748   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.242219   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.242647   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.242677   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.242968   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.243197   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.243376   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.243542   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:37:24.342244   18929 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:37:24.348430   18929 info.go:137] Remote host: Buildroot 2020.02.12
	I0816 22:37:24.348458   18929 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:37:24.348527   18929 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:37:24.348678   18929 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0816 22:37:24.348794   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:37:24.358370   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:37:24.378832   18929 start.go:270] post-start completed in 143.493882ms
	I0816 22:37:24.378891   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.379183   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.385172   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.385565   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.385596   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.385720   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.385936   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.386069   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.386238   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.386404   18929 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:24.386604   18929 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.105.129 22 <nil> <nil>}
	I0816 22:37:24.386621   18929 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0816 22:37:24.513150   18929 main.go:130] libmachine: SSH cmd err, output: <nil>: 1629153444.435910196
	
	I0816 22:37:24.513175   18929 fix.go:212] guest clock: 1629153444.435910196
	I0816 22:37:24.513185   18929 fix.go:225] Guest: 2021-08-16 22:37:24.435910196 +0000 UTC Remote: 2021-08-16 22:37:24.379164096 +0000 UTC m=+28.470229855 (delta=56.7461ms)
	I0816 22:37:24.513209   18929 fix.go:196] guest clock delta is within tolerance: 56.7461ms
	I0816 22:37:24.513220   18929 fix.go:57] fixHost completed within 14.813246061s
	I0816 22:37:24.513226   18929 start.go:80] releasing machines lock for "embed-certs-20210816223333-6986", held for 14.813280431s
	I0816 22:37:24.513267   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.513532   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetIP
	I0816 22:37:24.519703   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.520118   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.520149   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.520319   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.520528   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.521062   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.521300   18929 ssh_runner.go:149] Run: systemctl --version
	I0816 22:37:24.521326   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.521364   18929 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:37:24.521406   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.527844   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.527923   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.528257   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.528281   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.528308   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.528323   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.528556   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.528678   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.528724   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.528933   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.528943   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.529108   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.529179   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:37:24.529267   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:37:24.634682   18929 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:37:24.634891   18929 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:37:24.131199   18635 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:37:24.131267   18635 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:37:24.140028   18635 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:37:24.157600   18635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:37:24.171359   18635 system_pods.go:59] 8 kube-system pods found
	I0816 22:37:24.171398   18635 system_pods.go:61] "coredns-fb8b8dccf-qwcrg" [fd98f945-fee1-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171407   18635 system_pods.go:61] "etcd-old-k8s-version-20210816223154-6986" [1d77612e-fee2-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171414   18635 system_pods.go:61] "kube-apiserver-old-k8s-version-20210816223154-6986" [152107a2-fee2-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171420   18635 system_pods.go:61] "kube-controller-manager-old-k8s-version-20210816223154-6986" [8620a0da-fee2-11eb-b5b6-525400bf2371] Pending
	I0816 22:37:24.171426   18635 system_pods.go:61] "kube-proxy-nvb2s" [fdaa2b42-fee1-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171438   18635 system_pods.go:61] "kube-scheduler-old-k8s-version-20210816223154-6986" [1b1505e6-fee2-11eb-bb5b-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:37:24.171454   18635 system_pods.go:61] "metrics-server-8546d8b77b-gl6jr" [28801d4e-fee2-11eb-bb5b-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:37:24.171462   18635 system_pods.go:61] "storage-provisioner" [ff1e11f1-fee1-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171469   18635 system_pods.go:74] duration metric: took 13.840978ms to wait for pod list to return data ...
	I0816 22:37:24.171481   18635 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:37:24.176303   18635 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:37:24.176347   18635 node_conditions.go:123] node cpu capacity is 2
	I0816 22:37:24.176360   18635 node_conditions.go:105] duration metric: took 4.872863ms to run NodePressure ...
	I0816 22:37:24.176376   18635 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:25.292041   18635 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.115642082s)
	I0816 22:37:25.292077   18635 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:37:25.325547   18635 kubeadm.go:746] kubelet initialised
	I0816 22:37:25.325574   18635 kubeadm.go:747] duration metric: took 33.485813ms waiting for restarted kubelet to initialise ...
	I0816 22:37:25.325590   18635 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:37:25.351142   18635 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:27.387702   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:25.482074   19204 out.go:177] * Restarting existing kvm2 VM for "default-k8s-different-port-20210816223418-6986" ...
	I0816 22:37:25.482104   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Start
	I0816 22:37:25.482316   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Ensuring networks are active...
	I0816 22:37:25.484598   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Ensuring network default is active
	I0816 22:37:25.485014   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Ensuring network mk-default-k8s-different-port-20210816223418-6986 is active
	I0816 22:37:25.485452   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Getting domain xml...
	I0816 22:37:25.487765   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Creating domain...
	I0816 22:37:25.923048   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Waiting to get IP...
	I0816 22:37:25.924065   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.924660   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Found IP for machine: 192.168.50.186
	I0816 22:37:25.924682   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Reserving static IP address...
	I0816 22:37:25.924701   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has current primary IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.925155   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "default-k8s-different-port-20210816223418-6986", mac: "52:54:00:ed:d4:05", ip: "192.168.50.186"} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:34:34 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:25.925187   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | skip adding static IP to network mk-default-k8s-different-port-20210816223418-6986 - found existing host DHCP lease matching {name: "default-k8s-different-port-20210816223418-6986", mac: "52:54:00:ed:d4:05", ip: "192.168.50.186"}
	I0816 22:37:25.925202   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Reserved static IP address: 192.168.50.186
	I0816 22:37:25.925219   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Waiting for SSH to be available...
	I0816 22:37:25.925234   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Getting to WaitForSSH function...
	I0816 22:37:25.930369   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.930660   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:34:34 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:25.930705   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.930802   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Using SSH client type: external
	I0816 22:37:25.930842   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa (-rw-------)
	I0816 22:37:25.930888   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:37:25.931010   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | About to run SSH command:
	I0816 22:37:25.931033   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | exit 0
	I0816 22:37:30.356304   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:37:30.356337   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:37:30.357361   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:30.544479   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:30.544514   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:30.857809   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:30.866881   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:30.866920   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:28.652395   18929 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.017437883s)
	I0816 22:37:28.652577   18929 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0816 22:37:28.652647   18929 ssh_runner.go:149] Run: which lz4
	I0816 22:37:28.657345   18929 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 22:37:28.662555   18929 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0816 22:37:28.662584   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (928970367 bytes)
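
The three Run lines above are the image-preload path: `which lz4` confirms the decompressor is installed, the failed stat shows /preloaded.tar.lz4 is absent, so the 928970367-byte preload tarball is copied to the VM (it is unpacked with `tar -I lz4 -C /var` further down in the log). A hypothetical local sketch of the same check-copy-extract flow, with made-up paths and plain cp standing in for the scp over SSH:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensurePreload mirrors the logged steps: skip if the tarball already
// exists, otherwise copy it into place and extract it with lz4.
func ensurePreload(local, remote string) error {
	if _, err := os.Stat(remote); err == nil {
		return nil // existence check succeeded: nothing to transfer
	}
	if err := exec.Command("cp", local, remote).Run(); err != nil {
		return fmt.Errorf("copy preload: %w", err)
	}
	// Equivalent of: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	return exec.Command("tar", "-I", "lz4", "-C", "/var", "-xf", remote).Run()
}

func main() {
	if err := ensurePreload("preloaded-images.tar.lz4", "/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
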
	I0816 22:37:31.357641   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:31.385946   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:31.385974   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:31.857651   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:31.878038   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:31.878070   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:32.357730   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:32.371926   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:32.371954   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:32.857204   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:32.867865   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 200:
	ok
	I0816 22:37:32.881085   18923 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:37:32.881113   18923 api_server.go:129] duration metric: took 8.025015474s to wait for apiserver health ...
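
The exchanges above show the apiserver readiness protocol: minikube polls https://192.168.116.66:8443/healthz roughly twice a second, treating 403 (the anonymous user before RBAC bootstraps) and 500 (poststarthooks still failing) as "not ready", until the endpoint returns 200 "ok", which here took 8.025s. A standalone sketch of that poll loop, assuming a self-signed apiserver certificate (hypothetical code, not minikube's api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout
// expires. Non-200 responses (403, 500) are printed and retried.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a cert from its own CA; skip
			// verification for this health probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.116.66:8443/healthz", 4*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
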
	I0816 22:37:32.881124   18923 cni.go:93] Creating CNI manager for ""
	I0816 22:37:32.881132   18923 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:37:29.389763   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:31.391442   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:35.155848   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | SSH cmd err, output: exit status 255: 
	I0816 22:37:35.155882   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0816 22:37:35.155896   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | command : exit 0
	I0816 22:37:35.155905   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | err     : exit status 255
	I0816 22:37:35.155918   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | output  : 
	I0816 22:37:32.883184   18923 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:37:32.883268   18923 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:37:32.927942   18923 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:37:33.011939   18923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:37:33.043009   18923 system_pods.go:59] 8 kube-system pods found
	I0816 22:37:33.043056   18923 system_pods.go:61] "coredns-78fcd69978-nzf79" [a95afe1c-4f93-44a8-b669-b42c72f3500d] Running
	I0816 22:37:33.043064   18923 system_pods.go:61] "etcd-no-preload-20210816223156-6986" [fc40f0e0-16ef-4ba8-b5fd-17f4684d3a13] Running
	I0816 22:37:33.043076   18923 system_pods.go:61] "kube-apiserver-no-preload-20210816223156-6986" [f13df2c8-5aa8-49c3-89c0-b584ff8c62c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 22:37:33.043083   18923 system_pods.go:61] "kube-controller-manager-no-preload-20210816223156-6986" [8b866a1c-d283-4410-acbf-be2dbaa0f025] Running
	I0816 22:37:33.043094   18923 system_pods.go:61] "kube-proxy-64m6s" [fc5086fe-a671-4078-b76c-0c8f0656dca6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:37:33.043108   18923 system_pods.go:61] "kube-scheduler-no-preload-20210816223156-6986" [5db4c302-251a-47dc-90b9-424206ed445d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:37:33.043123   18923 system_pods.go:61] "metrics-server-7c784ccb57-44llk" [319102e5-661e-43bc-9c07-07463f6b1e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:37:33.043129   18923 system_pods.go:61] "storage-provisioner" [3da85640-a722-4ba1-a886-926bcaf81b8e] Running
	I0816 22:37:33.043140   18923 system_pods.go:74] duration metric: took 31.176037ms to wait for pod list to return data ...
	I0816 22:37:33.043149   18923 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:37:33.049500   18923 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:37:33.049531   18923 node_conditions.go:123] node cpu capacity is 2
	I0816 22:37:33.049544   18923 node_conditions.go:105] duration metric: took 6.385759ms to run NodePressure ...
	I0816 22:37:33.049562   18923 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:33.993434   18923 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:37:34.012191   18923 kubeadm.go:746] kubelet initialised
	I0816 22:37:34.012215   18923 kubeadm.go:747] duration metric: took 18.75429ms waiting for restarted kubelet to initialise ...
	I0816 22:37:34.012224   18923 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:37:34.033224   18923 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-nzf79" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:34.059145   18923 pod_ready.go:92] pod "coredns-78fcd69978-nzf79" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:34.059169   18923 pod_ready.go:81] duration metric: took 25.912051ms waiting for pod "coredns-78fcd69978-nzf79" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:34.059183   18923 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:32.660993   18929 containerd.go:546] Took 4.003687 seconds to copy over tarball
	I0816 22:37:32.661054   18929 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 22:37:33.892216   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:36.388385   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:38.156062   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Getting to WaitForSSH function...
	I0816 22:37:38.161988   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:38.162321   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:34:34 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:38.162379   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:38.162468   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Using SSH client type: external
	I0816 22:37:38.162499   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa (-rw-------)
	I0816 22:37:38.162538   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:37:38.162552   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | About to run SSH command:
	I0816 22:37:38.162570   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | exit 0
	I0816 22:37:36.102180   18923 pod_ready.go:102] pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:38.889153   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:41.402823   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | SSH cmd err, output: <nil>: 
	I0816 22:37:41.403283   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetConfigRaw
	I0816 22:37:41.404010   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetIP
	I0816 22:37:41.410017   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.410394   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.410432   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.410693   19204 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/config.json ...
	I0816 22:37:41.410926   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:41.411142   19204 machine.go:88] provisioning docker machine ...
	I0816 22:37:41.411167   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:41.411335   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetMachineName
	I0816 22:37:41.411477   19204 buildroot.go:166] provisioning hostname "default-k8s-different-port-20210816223418-6986"
	I0816 22:37:41.411499   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetMachineName
	I0816 22:37:41.411620   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:41.416760   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.417121   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.417154   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.417291   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:41.417487   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:41.417620   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:41.417769   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:41.417933   19204 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:41.418151   19204 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 22:37:41.418167   19204 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20210816223418-6986 && echo "default-k8s-different-port-20210816223418-6986" | sudo tee /etc/hostname
	I0816 22:37:41.560416   19204 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20210816223418-6986
	
	I0816 22:37:41.560449   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:41.566690   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.567028   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.567064   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.567351   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:41.567542   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:41.567703   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:41.567827   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:41.567996   19204 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:41.568193   19204 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 22:37:41.568221   19204 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20210816223418-6986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20210816223418-6986/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20210816223418-6986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:37:41.743484   19204 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:37:41.743518   19204 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:37:41.743559   19204 buildroot.go:174] setting up certificates
	I0816 22:37:41.743576   19204 provision.go:83] configureAuth start
	I0816 22:37:41.743593   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetMachineName
	I0816 22:37:41.743895   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetIP
	I0816 22:37:41.750014   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.750423   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.750467   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.750809   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:41.756158   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.756536   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.756569   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.756717   19204 provision.go:138] copyHostCerts
	I0816 22:37:41.756789   19204 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:37:41.756799   19204 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:37:41.756862   19204 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:37:41.756962   19204 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:37:41.756972   19204 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:37:41.756994   19204 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:37:41.757071   19204 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:37:41.757082   19204 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:37:41.757102   19204 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
	I0816 22:37:41.757156   19204 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20210816223418-6986 san=[192.168.50.186 192.168.50.186 localhost 127.0.0.1 minikube default-k8s-different-port-20210816223418-6986]
	I0816 22:37:42.356131   19204 provision.go:172] copyRemoteCerts
	I0816 22:37:42.356205   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:37:42.356250   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.362214   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.362513   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.362547   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.362780   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.362992   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.363219   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.363363   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:37:42.482862   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:37:42.512838   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0816 22:37:42.540047   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:37:42.568047   19204 provision.go:86] duration metric: configureAuth took 824.454088ms
	I0816 22:37:42.568077   19204 buildroot.go:189] setting minikube options for container-runtime
	I0816 22:37:42.568300   19204 config.go:177] Loaded profile config "default-k8s-different-port-20210816223418-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:37:42.568315   19204 machine.go:91] provisioned docker machine in 1.157156536s
	I0816 22:37:42.568324   19204 start.go:267] post-start starting for "default-k8s-different-port-20210816223418-6986" (driver="kvm2")
	I0816 22:37:42.568333   19204 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:37:42.568368   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.568715   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:37:42.568749   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.574488   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.574891   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.574928   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.575140   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.575339   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.575523   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.575710   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:37:42.676578   19204 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:37:42.682148   19204 info.go:137] Remote host: Buildroot 2020.02.12
	I0816 22:37:42.682181   19204 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:37:42.682247   19204 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:37:42.682409   19204 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0816 22:37:42.682558   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:37:42.691519   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:37:42.711453   19204 start.go:270] post-start completed in 143.110809ms
	I0816 22:37:42.711496   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.711732   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.718125   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.718511   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.718547   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.718832   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.719063   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.719246   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.719404   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.719588   19204 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:42.719762   19204 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 22:37:42.719775   19204 main.go:130] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0816 22:37:42.864591   19204 main.go:130] libmachine: SSH cmd err, output: <nil>: 1629153462.785763979
	
	I0816 22:37:42.864617   19204 fix.go:212] guest clock: 1629153462.785763979
	I0816 22:37:42.864627   19204 fix.go:225] Guest: 2021-08-16 22:37:42.785763979 +0000 UTC Remote: 2021-08-16 22:37:42.711713193 +0000 UTC m=+17.455762277 (delta=74.050786ms)
	I0816 22:37:42.864651   19204 fix.go:196] guest clock delta is within tolerance: 74.050786ms
	I0816 22:37:42.864660   19204 fix.go:57] fixHost completed within 17.405528602s
	I0816 22:37:42.864666   19204 start.go:80] releasing machines lock for "default-k8s-different-port-20210816223418-6986", held for 17.405551891s
	I0816 22:37:42.864711   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.864961   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetIP
	I0816 22:37:42.871077   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.871460   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.871504   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.871781   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.871990   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.872747   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.873035   19204 ssh_runner.go:149] Run: systemctl --version
	I0816 22:37:42.873067   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.873387   19204 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:37:42.873431   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.881178   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.881737   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.882041   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.882067   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.882095   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.882114   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.882432   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.882476   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.882624   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.882654   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.882754   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.882821   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.882852   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:37:42.882932   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:37:42.983824   19204 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:37:42.983945   19204 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:37:41.792417   18923 pod_ready.go:102] pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:42.110388   18923 pod_ready.go:92] pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:42.110425   18923 pod_ready.go:81] duration metric: took 8.051231395s waiting for pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.110443   18923 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.128769   18923 pod_ready.go:92] pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:42.128789   18923 pod_ready.go:81] duration metric: took 18.337432ms waiting for pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.128804   18923 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.137520   18923 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:42.137541   18923 pod_ready.go:81] duration metric: took 8.728281ms waiting for pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.137554   18923 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-64m6s" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.158798   18923 pod_ready.go:92] pod "kube-proxy-64m6s" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:42.158877   18923 pod_ready.go:81] duration metric: took 21.313805ms waiting for pod "kube-proxy-64m6s" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.158908   18923 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:44.512973   18923 pod_ready.go:102] pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:45.697026   18923 pod_ready.go:92] pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:45.697054   18923 pod_ready.go:81] duration metric: took 3.538123235s waiting for pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:45.697067   18923 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:44.369712   18929 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (11.708626678s)
	I0816 22:37:44.369752   18929 containerd.go:553] Took 11.708733 seconds to extract the tarball
	I0816 22:37:44.369766   18929 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0816 22:37:44.433232   18929 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:37:44.586357   18929 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:37:44.635654   18929 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0816 22:37:44.682553   18929 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0816 22:37:44.697822   18929 docker.go:153] disabling docker service ...
	I0816 22:37:44.697882   18929 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:37:44.709238   18929 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:37:44.720469   18929 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:37:44.857666   18929 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:37:44.991672   18929 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:37:45.005773   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:37:45.020903   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0816 22:37:45.035818   18929 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:37:45.045388   18929 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:37:45.045444   18929 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:37:45.065836   18929 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 22:37:45.073649   18929 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:37:45.210250   18929 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:37:45.536389   18929 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0816 22:37:45.536468   18929 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:37:45.543940   18929 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0816 22:37:46.648822   18929 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:37:46.654589   18929 start.go:413] Will wait 60s for crictl version
	I0816 22:37:46.654654   18929 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:37:46.687975   18929 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0816 22:37:46.688041   18929 ssh_runner.go:149] Run: containerd --version
	I0816 22:37:46.717960   18929 ssh_runner.go:149] Run: containerd --version
	I0816 22:37:43.671220   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:45.887022   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:47.896514   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:46.994449   19204 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.010481954s)
	I0816 22:37:46.994588   19204 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0816 22:37:46.994677   19204 ssh_runner.go:149] Run: which lz4
	I0816 22:37:46.999431   19204 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0816 22:37:47.004309   19204 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0816 22:37:47.004338   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (928970367 bytes)
	I0816 22:37:47.723452   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:49.727582   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:46.750218   18929 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0816 22:37:46.750266   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetIP
	I0816 22:37:46.755631   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:46.756018   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:46.756051   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:46.756195   18929 ssh_runner.go:149] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0816 22:37:46.760434   18929 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:37:46.770865   18929 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:37:46.770913   18929 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:37:46.804122   18929 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:37:46.804147   18929 containerd.go:517] Images already preloaded, skipping extraction
	I0816 22:37:46.804200   18929 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:37:46.836132   18929 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:37:46.836154   18929 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:37:46.836213   18929 ssh_runner.go:149] Run: sudo crictl info
	I0816 22:37:46.870224   18929 cni.go:93] Creating CNI manager for ""
	I0816 22:37:46.870256   18929 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:37:46.870269   18929 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:37:46.870282   18929 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.129 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20210816223333-6986 NodeName:embed-certs-20210816223333-6986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.105.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:37:46.870401   18929 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20210816223333-6986"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 22:37:46.870482   18929 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20210816223333-6986 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.105.129 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816223333-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0816 22:37:46.870540   18929 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0816 22:37:46.878703   18929 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:37:46.878775   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:37:46.887763   18929 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (548 bytes)
	I0816 22:37:46.900548   18929 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 22:37:46.911899   18929 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2090 bytes)
	I0816 22:37:46.925412   18929 ssh_runner.go:149] Run: grep 192.168.105.129	control-plane.minikube.internal$ /etc/hosts
	I0816 22:37:46.929442   18929 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:37:46.939989   18929 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986 for IP: 192.168.105.129
	I0816 22:37:46.940054   18929 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:37:46.940073   18929 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:37:46.940143   18929 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/client.key
	I0816 22:37:46.940182   18929 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/apiserver.key.ff3abd74
	I0816 22:37:46.940203   18929 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/proxy-client.key
	I0816 22:37:46.940311   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem (1338 bytes)
	W0816 22:37:46.940364   18929 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
	I0816 22:37:46.940374   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 22:37:46.940398   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:37:46.940419   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:37:46.940453   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1675 bytes)
	I0816 22:37:46.940501   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:37:46.941607   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:37:46.959921   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 22:37:46.977073   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:37:46.995032   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 22:37:47.016388   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:37:47.036886   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 22:37:47.056736   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:37:47.076945   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 22:37:47.096512   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
	I0816 22:37:47.117888   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
	I0816 22:37:47.137952   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:37:47.159313   18929 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:37:47.173334   18929 ssh_runner.go:149] Run: openssl version
	I0816 22:37:47.179650   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:37:47.191486   18929 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:37:47.196524   18929 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:42 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:37:47.196589   18929 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:37:47.204162   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:37:47.214626   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
	I0816 22:37:47.226391   18929 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6986.pem
	I0816 22:37:47.234494   18929 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:49 /usr/share/ca-certificates/6986.pem
	I0816 22:37:47.234558   18929 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
	I0816 22:37:47.242705   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
	I0816 22:37:47.253305   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
	I0816 22:37:47.263502   18929 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/69862.pem
	I0816 22:37:47.268803   18929 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:49 /usr/share/ca-certificates/69862.pem
	I0816 22:37:47.268865   18929 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
	I0816 22:37:47.274964   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:37:47.283354   18929 kubeadm.go:390] StartCluster: {Name:embed-certs-20210816223333-6986 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816223333-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.129 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:37:47.283503   18929 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0816 22:37:47.283565   18929 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:37:47.325446   18929 cri.go:76] found id: ""
	I0816 22:37:47.325557   18929 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:37:47.335659   18929 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:37:47.335682   18929 kubeadm.go:600] restartCluster start
	I0816 22:37:47.335733   18929 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:37:47.346292   18929 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:47.347565   18929 kubeconfig.go:117] verify returned: extract IP: "embed-certs-20210816223333-6986" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:37:47.348014   18929 kubeconfig.go:128] "embed-certs-20210816223333-6986" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:37:47.348788   18929 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
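
Editor's note: the "context is missing ... will repair" step amounts to loading the kubeconfig and checking whether the profile name appears among its contexts. A hedged sketch using client-go's clientcmd loader (the import path and LoadFromFile call are the real client-go API; the check itself is simplified, not minikube's actual repair code):

    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        path := os.Getenv("KUBECONFIG")
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        // The verify step in the log fails when the profile's context is absent.
        name := "embed-certs-20210816223333-6986"
        if _, ok := cfg.Contexts[name]; !ok {
            fmt.Printf("%q context is missing from %s - will repair!\n", name, path)
        }
    }
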
	I0816 22:37:47.351634   18929 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:37:47.361663   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:47.361718   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:47.374579   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:47.574973   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:47.575059   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:47.589172   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:47.775434   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:47.775507   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:47.788957   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:47.975270   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:47.975360   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:47.989460   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.175680   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.175758   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.191429   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.375697   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.375790   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.386436   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.574665   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.574762   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.589082   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.775443   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.775512   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.791358   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.975634   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.975720   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.988259   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.175437   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.175544   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.190342   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.375596   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.375683   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.389601   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.574808   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.574892   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.585369   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.775000   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.775066   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.787982   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.975134   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.975231   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.986392   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:50.175658   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:50.175750   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:50.188143   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:50.375418   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:50.375514   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:50.387182   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:50.387201   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:50.387249   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:50.397435   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:50.397461   18929 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
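
Editor's note: the block above is a fixed-interval poll: roughly every 200ms minikube re-runs pgrep until the apiserver process appears or a deadline passes, at which point it concludes a reconfigure is needed. A standard-library sketch of the same pattern (interval and timeout values are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep until the kube-apiserver process exists
    // or the timeout elapses, mirroring the repeated log lines above.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 only when a matching process is found.
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil
            }
            time.Sleep(200 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for the condition")
    }

    func main() {
        if err := waitForAPIServer(3 * time.Second); err != nil {
            fmt.Println("needs reconfigure: apiserver error:", err)
        }
    }
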
	I0816 22:37:50.397471   18929 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:37:50.397485   18929 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:37:50.397549   18929 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:37:50.439348   18929 cri.go:76] found id: ""
	I0816 22:37:50.439419   18929 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:37:50.459652   18929 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:37:50.469766   18929 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:37:50.469836   18929 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:37:50.479399   18929 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:37:50.479422   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:50.872420   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:50.387080   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:52.388399   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:53.358602   19204 containerd.go:546] Took 6.359210 seconds to copy over tarball
	I0816 22:37:53.358725   19204 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 22:37:51.735229   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:54.223000   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:52.412541   18929 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.540081052s)
	I0816 22:37:52.412575   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:52.718154   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:52.886875   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
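
Editor's note: the restart path re-runs individual kubeadm init phases against the generated config rather than a full "kubeadm init". A sketch of that sequencing (binary path and phase names copied from the log lines above; error handling simplified):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.21.3/kubeadm"
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command(kubeadm, args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
                return
            }
        }
    }
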
	I0816 22:37:53.025017   18929 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:37:53.025085   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:53.540988   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:54.040437   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:54.541392   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:55.040418   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:55.540381   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:54.887899   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:56.229434   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:58.302035   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:00.733041   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:56.040801   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:56.540669   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:57.040354   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:57.540386   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:58.040333   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:58.540400   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:59.040772   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:59.540444   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:00.041274   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:00.540645   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:00.741760   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:02.887487   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:03.393238   19204 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (10.034485098s)
	I0816 22:38:03.393270   19204 containerd.go:553] Took 10.034612 seconds to extract the tarball
	I0816 22:38:03.393282   19204 ssh_runner.go:100] rm: /preloaded.tar.lz4
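
Editor's note: extracting the preloaded image tarball is a plain tar invocation with lz4 as the decompressor, timed and then cleaned up. A minimal equivalent (paths from the log; assumes the lz4 binary is on PATH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    func main() {
        start := time.Now()
        // -I lz4 tells tar to filter the archive through the lz4 binary.
        cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "extract failed:", err)
            return
        }
        fmt.Printf("Took %.6f seconds to extract the tarball\n", time.Since(start).Seconds())
        // The archive is removed once extracted, as in the log.
        _ = os.Remove("/preloaded.tar.lz4")
    }
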
	I0816 22:38:03.459021   19204 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:38:03.599477   19204 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:38:03.656046   19204 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0816 22:38:03.843112   19204 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0816 22:38:03.858574   19204 docker.go:153] disabling docker service ...
	I0816 22:38:03.858632   19204 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:38:03.872784   19204 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:38:03.886816   19204 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:38:04.029472   19204 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:38:04.164998   19204 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:38:04.176395   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:38:04.190579   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kI
gogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
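
Editor's note: /etc/containerd/config.toml is shipped as a base64 string and decoded on the guest, which avoids shell-quoting issues over SSH (the long blob above decodes to the containerd TOML config). A sketch of the round trip, using a one-line stand-in payload rather than the full config:

    package main

    import (
        "encoding/base64"
        "fmt"
        "os"
    )

    func main() {
        // Stand-in for the long base64 payload in the log (assumption:
        // the real payload is the full containerd config above).
        toml := "root = \"/var/lib/containerd\"\n"
        payload := base64.StdEncoding.EncodeToString([]byte(toml))
        data, err := base64.StdEncoding.DecodeString(payload)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        if err := os.MkdirAll("/etc/containerd", 0o755); err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        if err := os.WriteFile("/etc/containerd/config.toml", data, 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
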
	I0816 22:38:04.204338   19204 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:38:04.211355   19204 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:38:04.211415   19204 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:38:04.229181   19204 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
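
Editor's note: the sysctl failure above is benign. The br_netfilter module is not loaded yet, so the /proc entry does not exist; minikube loads the module and enables IP forwarding afterwards. A sketch of that check-then-load order (must run as root; the flow mirrors the log, not minikube's exact code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
        if _, err := os.Stat(key); err != nil {
            // Missing /proc entry means br_netfilter is not loaded; load it.
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                fmt.Fprintln(os.Stderr, "modprobe br_netfilter:", err)
                return
            }
        }
        // Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward (needs root).
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
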
	I0816 22:38:04.236487   19204 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:38:04.368079   19204 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:38:03.226580   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:05.846484   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:01.040586   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:01.541229   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:02.041014   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:02.540773   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:03.040804   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:03.540654   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:04.041158   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:04.540403   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:05.041212   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:05.540477   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:05.871071   19204 ssh_runner.go:189] Completed: sudo systemctl restart containerd: (1.502953255s)
	I0816 22:38:05.871107   19204 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0816 22:38:05.871162   19204 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:38:05.876672   19204 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
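
Editor's note: the "will wait 60s for socket path" step is a retry-on-stat loop. A Go sketch that instead dials the unix socket (either signal works once containerd is up; the timings here are illustrative):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // A successful dial proves the socket exists and is accepting.
            if conn, err := net.Dial("unix", path); err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(1 * time.Second)
        }
        return fmt.Errorf("socket %s not ready after %s", path, timeout)
    }

    func main() {
        if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }
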
	I0816 22:38:06.981936   19204 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:38:06.987477   19204 start.go:413] Will wait 60s for crictl version
	I0816 22:38:06.987542   19204 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:38:07.019404   19204 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0816 22:38:07.019460   19204 ssh_runner.go:149] Run: containerd --version
	I0816 22:38:07.056241   19204 ssh_runner.go:149] Run: containerd --version
	I0816 22:38:05.841456   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:07.888564   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:07.088137   19204 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0816 22:38:07.088183   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetIP
	I0816 22:38:07.093462   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:38:07.093796   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:38:07.093832   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:38:07.093973   19204 ssh_runner.go:149] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 22:38:07.098921   19204 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:38:07.109221   19204 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:38:07.109293   19204 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:38:07.143575   19204 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:38:07.143601   19204 containerd.go:517] Images already preloaded, skipping extraction
	I0816 22:38:07.143659   19204 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:38:07.174105   19204 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:38:07.174129   19204 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:38:07.174182   19204 ssh_runner.go:149] Run: sudo crictl info
	I0816 22:38:07.212980   19204 cni.go:93] Creating CNI manager for ""
	I0816 22:38:07.213012   19204 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:38:07.213028   19204 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:38:07.213043   19204 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.186 APIServerPort:8444 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20210816223418-6986 NodeName:default-k8s-different-port-20210816223418-6986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:38:07.213191   19204 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.186
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20210816223418-6986"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 22:38:07.213279   19204 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20210816223418-6986 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.186 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210816223418-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
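
Editor's note: the kubeadm YAML and the kubelet unit above are rendered from the options struct printed earlier; conceptually each is a text template filled from those fields. A toy sketch of that rendering step (the template fragment and Opts type are invented for illustration, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    // Opts holds the handful of fields this fragment needs; the real
    // options struct in the log has many more.
    type Opts struct {
        AdvertiseAddress string
        APIServerPort    int
        NodeName         string
    }

    const frag = "apiVersion: kubeadm.k8s.io/v1beta2\n" +
        "kind: InitConfiguration\n" +
        "localAPIEndpoint:\n" +
        "  advertiseAddress: {{.AdvertiseAddress}}\n" +
        "  bindPort: {{.APIServerPort}}\n" +
        "nodeRegistration:\n" +
        "  name: \"{{.NodeName}}\"\n"

    func main() {
        t := template.Must(template.New("kubeadm").Parse(frag))
        _ = t.Execute(os.Stdout, Opts{
            AdvertiseAddress: "192.168.50.186",
            APIServerPort:    8444,
            NodeName:         "default-k8s-different-port-20210816223418-6986",
        })
    }
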
	I0816 22:38:07.213332   19204 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0816 22:38:07.222054   19204 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:38:07.222139   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:38:07.230063   19204 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (562 bytes)
	I0816 22:38:07.244461   19204 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 22:38:07.259892   19204 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0816 22:38:07.274883   19204 ssh_runner.go:149] Run: grep 192.168.50.186	control-plane.minikube.internal$ /etc/hosts
	I0816 22:38:07.280261   19204 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
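
Editor's note: the two /etc/hosts edits (host.minikube.internal, control-plane.minikube.internal) follow a grep-then-append pattern: drop any stale line for the name, append the current IP, and replace the file via a temp copy. A standard-library sketch of the same idea:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // setHostsEntry rewrites hostsPath so that exactly one line maps name
    // to ip, mirroring the grep -v / echo / cp pipeline in the log.
    func setHostsEntry(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // stale entry for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        tmp := hostsPath + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            return err
        }
        return os.Rename(tmp, hostsPath)
    }

    func main() {
        if err := setHostsEntry("/etc/hosts", "192.168.50.186", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
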
	I0816 22:38:07.293265   19204 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986 for IP: 192.168.50.186
	I0816 22:38:07.293314   19204 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:38:07.293333   19204 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:38:07.293384   19204 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/client.key
	I0816 22:38:07.293423   19204 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/apiserver.key.c5cc0a12
	I0816 22:38:07.293458   19204 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/proxy-client.key
	I0816 22:38:07.293569   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem (1338 bytes)
	W0816 22:38:07.293608   19204 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
	I0816 22:38:07.293618   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 22:38:07.293643   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:38:07.293668   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:38:07.293692   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1675 bytes)
	I0816 22:38:07.293738   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:38:07.294686   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:38:07.314730   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 22:38:07.332358   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:38:07.351920   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 22:38:07.369849   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:38:07.388099   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 22:38:07.406297   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:38:07.425998   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 22:38:07.443687   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:38:07.460832   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
	I0816 22:38:07.481210   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
	I0816 22:38:07.501717   19204 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:38:07.514903   19204 ssh_runner.go:149] Run: openssl version
	I0816 22:38:07.520949   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:38:07.531264   19204 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:38:07.536846   19204 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:42 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:38:07.536898   19204 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:38:07.543551   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:38:07.553322   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
	I0816 22:38:07.563414   19204 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6986.pem
	I0816 22:38:07.568579   19204 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:49 /usr/share/ca-certificates/6986.pem
	I0816 22:38:07.568631   19204 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
	I0816 22:38:07.574828   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
	I0816 22:38:07.582849   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
	I0816 22:38:07.591254   19204 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/69862.pem
	I0816 22:38:07.595981   19204 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:49 /usr/share/ca-certificates/69862.pem
	I0816 22:38:07.596044   19204 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
	I0816 22:38:07.602206   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:38:07.611191   19204 kubeadm.go:390] StartCluster: {Name:default-k8s-different-port-20210816223418-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210816223418-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:38:07.611272   19204 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0816 22:38:07.611319   19204 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:38:07.643146   19204 cri.go:76] found id: ""
	I0816 22:38:07.643226   19204 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:38:07.650886   19204 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:38:07.650919   19204 kubeadm.go:600] restartCluster start
	I0816 22:38:07.650971   19204 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:38:07.658653   19204 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:07.659605   19204 kubeconfig.go:117] verify returned: extract IP: "default-k8s-different-port-20210816223418-6986" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:38:07.660046   19204 kubeconfig.go:128] "default-k8s-different-port-20210816223418-6986" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:38:07.661820   19204 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:38:07.664797   19204 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:38:07.672378   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:07.672416   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:07.682197   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:07.882615   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:07.882689   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:07.893628   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.082995   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.083063   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.092764   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.283037   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.283112   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.293325   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.482586   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.482681   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.493502   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.682844   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.682915   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.693201   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.882416   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.882491   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.892118   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.082359   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.082457   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.092165   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.282385   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.282459   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.291528   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.482860   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.482930   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.493037   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.682335   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.682408   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.691945   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.883133   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.883193   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.892794   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.083140   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.083233   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.092308   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.223670   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:10.742112   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:06.041308   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:06.540690   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:07.041155   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:07.540839   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:08.040793   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:08.541292   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:09.041388   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:09.540943   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:10.041377   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:10.541237   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:10.386476   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:12.889815   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:10.282796   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.282889   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.292190   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.482261   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.482330   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.491729   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.683104   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.683186   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.693060   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.693079   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.693121   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.701893   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.701916   19204 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0816 22:38:10.701925   19204 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:38:10.701938   19204 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:38:10.701989   19204 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:38:10.740433   19204 cri.go:76] found id: ""
	I0816 22:38:10.740501   19204 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:38:10.756485   19204 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:38:10.765450   19204 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:38:10.765507   19204 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:38:10.772477   19204 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:38:10.772499   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:11.017384   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:12.671111   19204 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.653686174s)
	I0816 22:38:12.671155   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:12.947393   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:13.086256   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:13.215447   19204 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:38:13.215508   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:13.731105   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.231119   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.731093   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:15.231319   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:13.224797   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:15.723341   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:11.040800   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:11.540697   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:12.040673   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:12.541181   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:13.041152   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:13.541025   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.041183   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.541230   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.551768   18929 api_server.go:70] duration metric: took 21.526753133s to wait for apiserver process to appear ...
	I0816 22:38:14.551790   18929 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:38:14.551800   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:38:15.386344   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:16.395588   18635 pod_ready.go:92] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.395621   18635 pod_ready.go:81] duration metric: took 51.044447203s waiting for pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.395634   18635 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.408068   18635 pod_ready.go:92] pod "etcd-old-k8s-version-20210816223154-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.408086   18635 pod_ready.go:81] duration metric: took 12.443476ms waiting for pod "etcd-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.408096   18635 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.414488   18635 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20210816223154-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.414507   18635 pod_ready.go:81] duration metric: took 6.402316ms waiting for pod "kube-apiserver-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.414521   18635 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.420281   18635 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20210816223154-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.420300   18635 pod_ready.go:81] duration metric: took 5.769412ms waiting for pod "kube-controller-manager-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.420313   18635 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nvb2s" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.425411   18635 pod_ready.go:92] pod "kube-proxy-nvb2s" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.425430   18635 pod_ready.go:81] duration metric: took 5.109715ms waiting for pod "kube-proxy-nvb2s" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.425440   18635 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.784339   18635 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20210816223154-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.784360   18635 pod_ready.go:81] duration metric: took 358.911908ms waiting for pod "kube-scheduler-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.784371   18635 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace to be "Ready" ...
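The pod_ready lines above walk every control-plane pod through the same check: a pod counts as Ready when its PodReady condition reports True, polled until it flips or the 4m0s budget runs out. A sketch of that check with client-go, assuming a reachable cluster via the default kubeconfig and borrowing the coredns pod name from this run purely as an example:

```go
// Sketch of a pod_ready-style check: a pod is "Ready" when its PodReady
// condition is True; the poll repeats on the ~2s cadence visible in the log.
// Kubeconfig location and pod name are assumptions taken from this run.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // "waiting up to 4m0s"
	for time.Now().Before(deadline) {
		if ready, err := podReady(cs, "kube-system", "coredns-fb8b8dccf-qwcrg"); err == nil && ready {
			fmt.Println(`pod has status "Ready":"True"`)
			return
		}
		fmt.Println(`pod has status "Ready":"False"`)
		time.Sleep(2 * time.Second)
	}
}
```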
	I0816 22:38:18.553150   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:38:18.553194   18929 api_server.go:101] status: https://192.168.105.129:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:38:19.053887   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:38:19.071151   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:38:19.071179   18929 api_server.go:101] status: https://192.168.105.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:38:19.553619   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:38:19.561382   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:38:19.561406   18929 api_server.go:101] status: https://192.168.105.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:38:20.053341   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:38:20.061527   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 200:
	ok
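The healthz sequence above is a normal startup progression: 403 while the unauthenticated request is still seen as system:anonymous (consistent with the rbac/bootstrap-roles poststarthook still failing in the 500 bodies, so the role binding that exposes /healthz to anonymous users isn't in place yet), 500 while individual poststarthooks fail, then 200 once everything settles. A sketch of a poller that treats any transport error or non-200 status as "retry", taking the endpoint URL from the log and skipping certificate verification for brevity (minikube itself would use the cluster CA; that simplification is an assumption):

```go
// Sketch of a healthz poller matching the 403 -> 500 -> 200 progression above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// A per-request timeout like this produces the "Client.Timeout
		// exceeded while awaiting headers" errors seen later in the log.
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %v\n", err) // transport-level failure, retry
		} else {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil // "healthz returned 200: ok"
			}
			// 403 before RBAC bootstrap, 500 while poststarthooks fail
			fmt.Printf("healthz returned %d, retrying\n", code)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz never returned 200 within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.105.129:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```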
	I0816 22:38:20.069537   18929 api_server.go:139] control plane version: v1.21.3
	I0816 22:38:20.069560   18929 api_server.go:129] duration metric: took 5.517764917s to wait for apiserver health ...
	I0816 22:38:20.069572   18929 cni.go:93] Creating CNI manager for ""
	I0816 22:38:20.069581   18929 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:38:15.731207   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:16.231247   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:16.731268   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:17.230730   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:17.730956   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:18.231458   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:18.730950   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:19.230879   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:19.730819   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:20.230563   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:20.243853   19204 api_server.go:70] duration metric: took 7.028407985s to wait for apiserver process to appear ...
	I0816 22:38:20.243876   19204 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:38:20.243887   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:18.225200   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:20.243220   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:20.071659   18929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:38:20.071738   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:38:20.084719   18929 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
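The two lines above create /etc/cni/net.d and copy in a 457-byte bridge conflist whose contents the log does not show. For illustration only, a sketch that writes a conflist of the general shape bridge CNI configs take; the subnet, plugin list, and file modes are assumptions, not values recovered from this run:

```go
// Writes an illustrative bridge+portmap conflist; values are placeholders.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// mirrors "sudo mkdir -p /etc/cni/net.d" followed by the scp of the file
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
```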
	I0816 22:38:20.113939   18929 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:38:20.132494   18929 system_pods.go:59] 8 kube-system pods found
	I0816 22:38:20.132598   18929 system_pods.go:61] "coredns-558bd4d5db-jq6bb" [c088e8ae-638c-449f-b206-10b016f707f4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:38:20.132622   18929 system_pods.go:61] "etcd-embed-certs-20210816223333-6986" [350ff095-f45d-4c87-a10a-cbb9a0cc4358] Running
	I0816 22:38:20.132654   18929 system_pods.go:61] "kube-apiserver-embed-certs-20210816223333-6986" [7ee444e9-f198-4d9b-985e-b190a2e5e369] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 22:38:20.132667   18929 system_pods.go:61] "kube-controller-manager-embed-certs-20210816223333-6986" [c71ecc69-d617-48d3-a162-46d27aedd0a9] Running
	I0816 22:38:20.132676   18929 system_pods.go:61] "kube-proxy-8h6xz" [7cbdd516-13c5-469b-8e60-7dc0babb699a] Running
	I0816 22:38:20.132688   18929 system_pods.go:61] "kube-scheduler-embed-certs-20210816223333-6986" [4ebf165e-13c3-4f42-a75f-4301ea2f6c78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:38:20.132698   18929 system_pods.go:61] "metrics-server-7c784ccb57-9xpsr" [6b6283cf-0668-48a4-9f21-61cb5723f0b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:38:20.132704   18929 system_pods.go:61] "storage-provisioner" [7893460e-43c2-4606-8b56-c2ed9ac764bd] Running
	I0816 22:38:20.132712   18929 system_pods.go:74] duration metric: took 18.749758ms to wait for pod list to return data ...
	I0816 22:38:20.132721   18929 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:38:20.138564   18929 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:38:20.138614   18929 node_conditions.go:123] node cpu capacity is 2
	I0816 22:38:20.138632   18929 node_conditions.go:105] duration metric: took 5.904026ms to run NodePressure ...
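The two waits above (the kube-system pod list, then the NodePressure/capacity verification) map onto two straightforward client-go reads. A sketch, assuming a reachable cluster via the default kubeconfig rather than minikube's internal client plumbing:

```go
// Lists kube-system pods and prints node capacity, mirroring the
// system_pods.go and node_conditions.go log lines above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node storage ephemeral capacity is %s\n", n.Status.Capacity.StorageEphemeral())
		fmt.Printf("node cpu capacity is %s\n", n.Status.Capacity.Cpu())
	}
}
```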
	I0816 22:38:20.138651   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:20.830223   18929 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:38:20.835364   18929 kubeadm.go:746] kubelet initialised
	I0816 22:38:20.835384   18929 kubeadm.go:747] duration metric: took 5.139864ms waiting for restarted kubelet to initialise ...
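The addon re-apply step shells out to the guest's pinned kubeadm binary with the exact command shown above. A sketch replaying it with os/exec; it only makes sense inside a minikube guest with those paths present, so treat it as illustration:

```go
// Re-applies cluster addons via kubeadm's addon phase; the command line is
// copied verbatim from the ssh_runner log entry above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("/bin/bash", "-c",
		`sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml`)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("kubeadm addon phase failed:", err)
	}
}
```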
	I0816 22:38:20.835392   18929 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:38:20.841354   18929 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:19.191797   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:21.192936   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:25.244953   19204 api_server.go:255] stopped: https://192.168.50.186:8444/healthz: Get "https://192.168.50.186:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 22:38:22.723414   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:25.223163   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:22.860677   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:24.863916   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:23.690499   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:25.690995   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:27.691820   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:25.746028   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:27.721976   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:29.722107   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:27.361030   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:29.361218   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:30.190894   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:32.192100   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:30.746969   19204 api_server.go:255] stopped: https://192.168.50.186:8444/healthz: Get "https://192.168.50.186:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 22:38:31.245148   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:32.224115   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:34.723153   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:31.859919   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:33.863770   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:34.691552   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:36.693980   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:36.246218   19204 api_server.go:255] stopped: https://192.168.50.186:8444/healthz: Get "https://192.168.50.186:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 22:38:36.745853   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:37.223369   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:39.239225   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:36.360668   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:38.361218   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:40.871372   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:41.344967   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:38:41.344991   19204 api_server.go:101] status: https://192.168.50.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:38:41.745061   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:41.754168   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:38:41.754195   19204 api_server.go:101] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:38:42.245898   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:42.258458   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:38:42.258509   19204 api_server.go:101] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:38:42.745610   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:42.756658   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 200:
	ok
	I0816 22:38:42.770293   19204 api_server.go:139] control plane version: v1.21.3
	I0816 22:38:42.770321   19204 api_server.go:129] duration metric: took 22.526438535s to wait for apiserver health ...
	I0816 22:38:42.770332   19204 cni.go:93] Creating CNI manager for ""
	I0816 22:38:42.770339   19204 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:38:39.192176   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:41.198006   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:42.772377   19204 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:38:42.772434   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:38:42.788298   19204 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:38:42.809709   19204 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:38:42.824805   19204 system_pods.go:59] 8 kube-system pods found
	I0816 22:38:42.824843   19204 system_pods.go:61] "coredns-558bd4d5db-ssfkf" [eb30728b-0eae-41d8-90bc-d8de8c6b4caa] Running
	I0816 22:38:42.824857   19204 system_pods.go:61] "etcd-default-k8s-different-port-20210816223418-6986" [825a27d4-d8dc-4dbe-a724-ac2e59508c5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 22:38:42.824865   19204 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210816223418-6986" [a3383733-5a20-4b5a-aeab-df3e61e37d94] Running
	I0816 22:38:42.824882   19204 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210816223418-6986" [42f433b1-271b-41a6-96a0-ab85fe6ba28e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 22:38:42.824896   19204 system_pods.go:61] "kube-proxy-psg4t" [98ca6629-d521-445d-99c2-b7e7ddf3b973] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:38:42.824905   19204 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210816223418-6986" [bef50322-5dc7-4680-b867-e17eb23298a8] Running
	I0816 22:38:42.824919   19204 system_pods.go:61] "metrics-server-7c784ccb57-rmrr6" [325f4892-3ae2-4a08-bc13-22c74c15c362] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:38:42.824929   19204 system_pods.go:61] "storage-provisioner" [89aadc6c-b5b0-47eb-b6e0-0f5fb78b1689] Running
	I0816 22:38:42.824936   19204 system_pods.go:74] duration metric: took 15.209253ms to wait for pod list to return data ...
	I0816 22:38:42.824947   19204 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:38:42.835095   19204 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:38:42.835144   19204 node_conditions.go:123] node cpu capacity is 2
	I0816 22:38:42.835160   19204 node_conditions.go:105] duration metric: took 10.206913ms to run NodePressure ...
	I0816 22:38:42.835178   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:43.431532   19204 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:38:43.443469   19204 kubeadm.go:746] kubelet initialised
	I0816 22:38:43.443543   19204 kubeadm.go:747] duration metric: took 11.973692ms waiting for restarted kubelet to initialise ...
	I0816 22:38:43.443567   19204 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:38:43.467119   19204 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-ssfkf" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:43.487197   19204 pod_ready.go:92] pod "coredns-558bd4d5db-ssfkf" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:43.487224   19204 pod_ready.go:81] duration metric: took 20.062907ms waiting for pod "coredns-558bd4d5db-ssfkf" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:43.487236   19204 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:41.723036   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:43.727234   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:42.883394   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:45.360217   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:43.692394   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:46.195001   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:45.513670   19204 pod_ready.go:102] pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:47.520170   19204 pod_ready.go:102] pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:50.012608   19204 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:50.012643   19204 pod_ready.go:81] duration metric: took 6.525398312s waiting for pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:50.012653   19204 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:50.018616   19204 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:50.018632   19204 pod_ready.go:81] duration metric: took 5.971078ms waiting for pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:50.018641   19204 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:46.223793   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:48.231527   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:50.721902   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:47.864929   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:50.359955   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:48.690708   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:50.691511   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:53.191133   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:52.030327   19204 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:54.530276   19204 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:52.723113   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:54.730785   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:52.865142   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:55.362902   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:55.692797   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:58.193231   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:56.537583   19204 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:59.032998   19204 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:59.531144   19204 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:59.531179   19204 pod_ready.go:81] duration metric: took 9.512530001s waiting for pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:59.531194   19204 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-psg4t" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:57.227423   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:59.722421   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:57.860847   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:00.383065   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:00.194401   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:02.693032   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:01.045104   19204 pod_ready.go:92] pod "kube-proxy-psg4t" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.045136   19204 pod_ready.go:81] duration metric: took 1.513934389s waiting for pod "kube-proxy-psg4t" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.045162   19204 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:03.065559   19204 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:02.225371   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:04.231432   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:01.360648   18929 pod_ready.go:92] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.360679   18929 pod_ready.go:81] duration metric: took 40.519291305s waiting for pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.360692   18929 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.377816   18929 pod_ready.go:92] pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.377835   18929 pod_ready.go:81] duration metric: took 17.135128ms waiting for pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.377844   18929 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.384900   18929 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.384919   18929 pod_ready.go:81] duration metric: took 7.067915ms waiting for pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.384928   18929 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.391593   18929 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.391615   18929 pod_ready.go:81] duration metric: took 6.679953ms waiting for pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.391628   18929 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8h6xz" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.397839   18929 pod_ready.go:92] pod "kube-proxy-8h6xz" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.397859   18929 pod_ready.go:81] duration metric: took 6.224125ms waiting for pod "kube-proxy-8h6xz" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.397870   18929 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.757203   18929 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.757231   18929 pod_ready.go:81] duration metric: took 359.352415ms waiting for pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.757245   18929 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:04.166965   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:05.190883   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:07.691413   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:05.560049   19204 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:07.563106   19204 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:10.058732   19204 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:06.241105   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:08.721067   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:10.729982   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:06.173818   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:08.671197   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:10.190249   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:12.190937   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:11.058551   19204 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:11.058589   19204 pod_ready.go:81] duration metric: took 10.013415785s waiting for pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:11.058602   19204 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:13.079741   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:13.222923   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:15.223480   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:11.169568   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:13.668888   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:15.675907   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:14.691328   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:17.193097   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:15.574185   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:18.080714   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:17.721688   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:20.223136   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:18.166872   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:20.167888   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:19.690743   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:21.695097   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:20.573176   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:22.575373   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:25.080599   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:22.721982   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:24.723334   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:22.674385   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:25.168465   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:24.191127   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:26.692188   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:27.576508   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:30.077538   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:26.725975   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:29.222550   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:27.667108   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:29.672819   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:29.190076   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:31.191096   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:32.573255   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:34.574846   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:31.222778   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:33.721695   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:35.722989   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:32.167222   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:34.168925   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:33.691602   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:35.693194   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:38.192247   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:36.575818   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:39.074280   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:37.724177   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:40.222061   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:36.667227   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:38.667709   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:40.193105   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:42.691214   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:41.577819   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:44.074371   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:42.222318   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:44.223676   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:41.169382   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:43.169678   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:45.172140   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:44.692521   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:46.693152   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:46.080520   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:48.574175   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:46.226822   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:48.723407   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:50.723464   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:47.669324   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:50.168305   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:49.191566   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:51.192223   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:50.574493   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:53.072736   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:55.075288   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:53.226025   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:55.722244   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:52.667088   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:54.668826   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:53.690899   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:55.692317   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:58.190689   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:57.076942   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:59.573822   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:58.225641   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:00.721925   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:57.165321   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:59.171812   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:00.194014   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:02.691574   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:01.573901   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:04.073928   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:02.724585   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:04.724644   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:01.175154   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:03.669857   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:05.191832   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:07.693327   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:06.576903   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:09.078443   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:07.222275   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:09.224637   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:06.167190   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:08.168551   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:10.668660   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:10.191769   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:12.693193   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:11.574665   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:13.576508   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:11.224838   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:13.721159   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:15.727256   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:12.670244   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:15.167885   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:15.194325   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:17.692108   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:16.072818   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:18.078890   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:17.729812   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:20.226491   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:17.177047   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:19.217251   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:20.192280   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:22.693518   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:20.574552   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:22.574777   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:24.577476   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:22.727579   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:24.728352   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:21.668537   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:24.167106   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:25.191135   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:27.191723   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:27.075236   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:29.574554   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:27.223601   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:29.225348   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:26.172206   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:28.666902   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:30.667512   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:29.693817   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:32.192170   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:32.073947   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:34.076857   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:31.806875   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:34.222064   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:32.670097   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:35.167425   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:34.193574   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:36.692421   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:36.575233   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:39.074418   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:36.223456   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:38.224575   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:40.721673   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:37.168398   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:39.172793   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:39.196016   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:41.690324   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:41.075116   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:43.576123   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:42.724088   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:44.724675   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:41.674073   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:44.170704   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:43.693077   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:45.693362   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:48.190525   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:45.576264   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:48.077395   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:46.729980   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:49.221967   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:46.171454   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:48.665714   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:50.668334   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:50.193564   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:52.691234   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:50.572686   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:52.574382   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:55.074999   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:51.222668   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:53.226343   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:55.725259   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:52.673171   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:55.168585   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:54.692513   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:57.191126   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:57.079875   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:59.573017   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:58.221527   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:00.227502   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:57.671255   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:00.168665   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:59.691534   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:01.693478   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:01.582883   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:04.072426   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:02.722966   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:04.727296   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:02.173240   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:04.665480   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:04.191798   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:06.691447   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:06.073825   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:08.074664   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:10.075325   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:07.223517   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:09.721892   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:06.667330   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:08.671220   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:09.191192   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:11.691389   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:12.076107   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:14.575585   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:11.725914   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:13.730699   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:11.169385   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:13.673312   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:14.191060   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:16.192184   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:16.576492   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:19.076650   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:16.225569   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:18.724188   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:20.724698   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:16.165664   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:18.166105   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:20.166339   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:18.691871   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:20.691922   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:23.191074   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:21.574173   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:24.075930   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:23.223119   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:25.223978   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:22.173729   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:24.666435   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:25.692064   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:27.693165   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:26.574028   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:28.577627   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:27.723162   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:30.225428   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:26.666698   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:28.667290   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:30.669320   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:30.191236   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:32.194129   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:31.078550   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:33.574708   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:32.272795   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:34.721477   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:32.670349   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:35.166861   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:34.691270   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:36.693071   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:36.073462   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:38.075367   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:36.731674   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:39.226976   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:37.170645   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:39.724821   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:39.190190   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:41.192605   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:43.194313   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:40.572815   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:43.074323   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:41.728026   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:44.222098   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:45.713684   18923 pod_ready.go:81] duration metric: took 4m0.016600156s waiting for pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace to be "Ready" ...
	E0816 22:41:45.713707   18923 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:41:45.713739   18923 pod_ready.go:38] duration metric: took 4m11.701504099s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:41:45.713769   18923 kubeadm.go:604] restartCluster took 4m33.579475629s
	W0816 22:41:45.713944   18923 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:41:45.714027   18923 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
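The pod_ready.go:102 lines throughout this log come from a simple poll-until-Ready loop: fetch the pod, print its Ready condition, retry until a deadline. A minimal client-go sketch of that shape follows — the function name and the 2s interval are assumptions; the 4m0s timeout matches the "timed out waiting 4m0s" line above.

package podready

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady blocks until the named pod reports Ready=True, or times out
// after 4m0s — the same budget the pod_ready.go:81/:66 lines above report.
func waitPodReady(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				// This is the state the log prints as `has status "Ready":"False"`.
				fmt.Printf("pod %q in %q namespace has status %q:%q\n",
					name, ns, cond.Type, cond.Status)
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
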
	I0816 22:41:42.167746   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:44.671010   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:45.690207   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:47.696181   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:45.573577   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:47.577169   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:50.074120   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:49.532312   18923 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.817885262s)
	I0816 22:41:49.532396   18923 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:41:49.547377   18923 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:41:49.547460   18923 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:41:49.586205   18923 cri.go:76] found id: "c265ff52803b42d9b0ff6036d0ca9fe4b655c9875474baa9a0a3f7255d230e17"
	I0816 22:41:49.586231   18923 cri.go:76] found id: ""
	W0816 22:41:49.586237   18923 kubeadm.go:840] found 1 kube-system containers to stop
	I0816 22:41:49.586243   18923 cri.go:221] Stopping containers: [c265ff52803b42d9b0ff6036d0ca9fe4b655c9875474baa9a0a3f7255d230e17]
	I0816 22:41:49.586286   18923 ssh_runner.go:149] Run: which crictl
	I0816 22:41:49.590992   18923 ssh_runner.go:149] Run: sudo /bin/crictl stop c265ff52803b42d9b0ff6036d0ca9fe4b655c9875474baa9a0a3f7255d230e17
	I0816 22:41:49.626874   18923 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:41:49.635033   18923 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:41:49.643072   18923 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:41:49.643114   18923 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
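The status-2 `ls` above is the expected path right after `kubeadm reset`: stale-config cleanup only runs when all four kubeconfig files still exist, so "config check failed, skipping stale config cleanup" means the node is already clean. A small sketch of that check, assuming a caller-supplied runner in place of the real ssh_runner:

package kubeadm

import "os/exec"

var confFiles = []string{
	"/etc/kubernetes/admin.conf",
	"/etc/kubernetes/kubelet.conf",
	"/etc/kubernetes/controller-manager.conf",
	"/etc/kubernetes/scheduler.conf",
}

// needsStaleConfigCleanup returns true only if every config survived the
// reset; a non-zero exit (the "Process exited with status 2" above) means
// there is nothing to clean up and kubeadm init can proceed directly.
func needsStaleConfigCleanup(run func(cmd *exec.Cmd) error) bool {
	cmd := exec.Command("sudo", append([]string{"ls", "-la"}, confFiles...)...)
	return run(cmd) == nil
}
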
	I0816 22:41:46.671498   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:49.167852   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:50.191302   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:52.194912   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:52.573508   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:54.574289   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:51.170118   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:53.672114   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:54.691353   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:56.691660   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:57.075408   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:59.575201   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:56.166934   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:58.175241   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:00.668070   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:58.692572   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:00.693110   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:02.693563   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:02.073370   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:04.074072   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:03.171450   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:05.675018   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:05.192214   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:07.692700   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:08.829041   18923 out.go:204]   - Generating certificates and keys ...
	I0816 22:42:08.831708   18923 out.go:204]   - Booting up control plane ...
	I0816 22:42:08.834200   18923 out.go:204]   - Configuring RBAC rules ...
	I0816 22:42:08.836416   18923 cni.go:93] Creating CNI manager for ""
	I0816 22:42:08.836433   18923 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:42:06.578343   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:08.578554   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:08.838017   18923 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:42:08.838073   18923 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:42:08.846501   18923 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
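For reference, a bridge conflist of the kind written to /etc/cni/net.d/1-k8s.conflist above. This is an illustrative shape only — not the exact 457-byte payload minikube generates — and the writer function is hypothetical:

package cni

import "os"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

// writeBridgeConfig mirrors the `sudo mkdir -p /etc/cni/net.d` + scp steps
// in the log, but locally instead of over SSH.
func writeBridgeConfig() error {
	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
		return err
	}
	return os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0644)
}
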
	I0816 22:42:08.869457   18923 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:42:08.869501   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:08.869527   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=no-preload-20210816223156-6986 minikube.k8s.io/updated_at=2021_08_16T22_42_08_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:09.240543   18923 ops.go:34] apiserver oom_adj: -16
	I0816 22:42:09.240662   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:09.839173   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:10.338906   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:10.839126   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:08.175656   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:10.670201   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:09.693093   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:12.193949   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:11.076847   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:13.572667   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:11.339623   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:11.839145   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:12.339335   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:12.839352   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:13.339016   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:13.838633   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:14.339209   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:14.839574   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:15.338605   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:15.838986   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:13.166828   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:15.170558   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:14.195434   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:16.691097   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:17.183312   18635 pod_ready.go:81] duration metric: took 4m0.398928004s waiting for pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace to be "Ready" ...
	E0816 22:42:17.183337   18635 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:42:17.183357   18635 pod_ready.go:38] duration metric: took 4m51.857756569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:42:17.183387   18635 kubeadm.go:604] restartCluster took 5m19.62322748s
	W0816 22:42:17.183554   18635 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:42:17.183589   18635 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0816 22:42:15.573445   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:17.576213   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:19.578780   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:16.339618   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:16.839112   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:17.338889   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:17.838606   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:18.339509   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:18.839537   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:19.338632   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:19.839240   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:20.339527   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:20.838664   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:17.671899   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:19.672963   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:20.586991   18635 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.403367986s)
	I0816 22:42:20.587083   18635 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:42:20.603414   18635 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:42:20.603499   18635 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:42:20.644469   18635 cri.go:76] found id: ""
	I0816 22:42:20.644547   18635 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:42:20.654179   18635 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:42:20.664747   18635 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:42:20.664790   18635 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
	I0816 22:42:21.326940   18635 out.go:204]   - Generating certificates and keys ...
	I0816 22:42:21.189008   18923 kubeadm.go:985] duration metric: took 12.319564991s to wait for elevateKubeSystemPrivileges.
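The burst of `kubectl get sa default` Run lines above is minikube waiting for the default service account to appear after creating the minikube-rbac clusterrolebinding; the attempts land roughly every 500ms until the ~12s duration metric is logged. A hedged sketch of that loop (names are illustrative, not minikube's actual helpers):

package kubeadm

import (
	"os/exec"
	"time"
)

// waitDefaultServiceAccount polls `kubectl get sa default` until the service
// account exists or the timeout expires — each attempt corresponds to one of
// the repeated Run lines in the log above.
func waitDefaultServiceAccount(kubectl, kubeconfig string, timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if cmd.Run() == nil {
			return true // SA exists; the RBAC bootstrap can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}
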
	I0816 22:42:21.189042   18923 kubeadm.go:392] StartCluster complete in 5m9.132482632s
	I0816 22:42:21.189068   18923 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:42:21.189186   18923 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:42:21.191084   18923 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0816 22:42:21.253468   18923 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0816 22:42:22.263255   18923 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20210816223156-6986" rescaled to 1
	I0816 22:42:22.263323   18923 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.116.66 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0816 22:42:22.265111   18923 out.go:177] * Verifying Kubernetes components...
	I0816 22:42:22.265169   18923 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:42:22.263389   18923 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:42:22.263413   18923 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:42:22.265318   18923 addons.go:59] Setting storage-provisioner=true in profile "no-preload-20210816223156-6986"
	I0816 22:42:22.265343   18923 addons.go:135] Setting addon storage-provisioner=true in "no-preload-20210816223156-6986"
	I0816 22:42:22.265343   18923 addons.go:59] Setting dashboard=true in profile "no-preload-20210816223156-6986"
	W0816 22:42:22.265352   18923 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:42:22.265365   18923 addons.go:135] Setting addon dashboard=true in "no-preload-20210816223156-6986"
	W0816 22:42:22.265384   18923 addons.go:147] addon dashboard should already be in state true
	I0816 22:42:22.263563   18923 config.go:177] Loaded profile config "no-preload-20210816223156-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0816 22:42:22.265401   18923 addons.go:59] Setting metrics-server=true in profile "no-preload-20210816223156-6986"
	I0816 22:42:22.265412   18923 host.go:66] Checking if "no-preload-20210816223156-6986" exists ...
	I0816 22:42:22.265427   18923 addons.go:135] Setting addon metrics-server=true in "no-preload-20210816223156-6986"
	W0816 22:42:22.265437   18923 addons.go:147] addon metrics-server should already be in state true
	I0816 22:42:22.265384   18923 addons.go:59] Setting default-storageclass=true in profile "no-preload-20210816223156-6986"
	I0816 22:42:22.265462   18923 host.go:66] Checking if "no-preload-20210816223156-6986" exists ...
	I0816 22:42:22.265390   18923 host.go:66] Checking if "no-preload-20210816223156-6986" exists ...
	I0816 22:42:22.265461   18923 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20210816223156-6986"
	I0816 22:42:22.265940   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.265944   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.265957   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.265942   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.265975   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.265986   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.266089   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.266123   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.281969   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45777
	I0816 22:42:22.282708   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.282877   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40837
	I0816 22:42:22.283046   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42349
	I0816 22:42:22.283302   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.283322   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.283427   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.283650   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.283893   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.284078   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.284092   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.284330   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.284347   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.284461   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.284627   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.284665   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.284970   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.285003   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.285116   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.285285   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.293128   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38523
	I0816 22:42:22.293558   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.294059   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.294082   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.294429   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.294987   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.295053   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
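The interleaved `Launching plugin server for driver kvm2` / `Plugin server listening at 127.0.0.1:<port>` / `Calling .GetVersion` lines are a per-driver RPC handshake over loopback: each addon operation spawns a driver plugin, connects, and proxies method calls to it. A minimal net/rpc sketch of that pattern — the types are illustrative, not libmachine's actual API:

package plugin

import (
	"net"
	"net/rpc"
)

type Driver struct{}

// GetVersion mirrors the first call each client makes after connecting.
func (d *Driver) GetVersion(_ struct{}, v *int) error { *v = 1; return nil }

// Serve starts a plugin server on an ephemeral loopback port and returns the
// address the client logs as "Plugin server listening at ...".
func Serve() (string, error) {
	l, err := net.Listen("tcp", "127.0.0.1:0") // kernel picks the port seen in the log
	if err != nil {
		return "", err
	}
	s := rpc.NewServer()
	if err := s.Register(&Driver{}); err != nil {
		return "", err
	}
	go s.Accept(l)
	return l.Addr().String(), nil // e.g. 127.0.0.1:45777
}
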
	I0816 22:42:22.298092   18923 addons.go:135] Setting addon default-storageclass=true in "no-preload-20210816223156-6986"
	W0816 22:42:22.298118   18923 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:42:22.298147   18923 host.go:66] Checking if "no-preload-20210816223156-6986" exists ...
	I0816 22:42:22.298560   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.298601   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.302416   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44833
	I0816 22:42:22.302994   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.303562   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.303593   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.304002   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.304209   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.305854   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34121
	I0816 22:42:22.306273   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.307236   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.307263   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.307631   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.307783   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.308340   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .DriverName
	I0816 22:42:22.310958   18923 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:42:22.311023   18923 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:42:22.311044   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:42:22.311064   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHHostname
	I0816 22:42:22.311377   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .DriverName
	I0816 22:42:22.313216   18923 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:42:22.311947   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45779
	I0816 22:42:22.313321   18923 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:42:22.313337   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:42:22.312981   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38669
	I0816 22:42:22.313354   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHHostname
	I0816 22:42:22.313674   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.313848   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.314124   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.314144   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.314391   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.314413   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.314493   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.314698   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.314875   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.315544   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.315591   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.319514   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .DriverName
	I0816 22:42:22.319736   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.321507   18923 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:42:22.320102   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:66:08", ip: ""} in network mk-no-preload-20210816223156-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:37:07 +0000 UTC Type:0 Mac:52:54:00:1e:66:08 Iaid: IPaddr:192.168.116.66 Prefix:24 Hostname:no-preload-20210816223156-6986 Clientid:01:52:54:00:1e:66:08}
	I0816 22:42:22.320309   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHPort
	I0816 22:42:22.320694   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.321331   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHPort
	I0816 22:42:22.321669   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:66:08", ip: ""} in network mk-no-preload-20210816223156-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:37:07 +0000 UTC Type:0 Mac:52:54:00:1e:66:08 Iaid: IPaddr:192.168.116.66 Prefix:24 Hostname:no-preload-20210816223156-6986 Clientid:01:52:54:00:1e:66:08}
	I0816 22:42:22.321594   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined IP address 192.168.116.66 and MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.180281   18635 out.go:204]   - Booting up control plane ...
	I0816 22:42:22.073806   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:24.079495   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:22.323189   18923 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:42:22.321708   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined IP address 192.168.116.66 and MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.321766   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHKeyPath
	I0816 22:42:22.321808   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHKeyPath
	I0816 22:42:22.323243   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:42:22.323341   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:42:22.323363   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHHostname
	I0816 22:42:22.323468   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHUsername
	I0816 22:42:22.323473   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHUsername
	I0816 22:42:22.323663   18923 sshutil.go:53] new ssh client: &{IP:192.168.116.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa Username:docker}
	I0816 22:42:22.323678   18923 sshutil.go:53] new ssh client: &{IP:192.168.116.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa Username:docker}
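Each `new ssh client` line above dials the VM with key-based auth as the docker user. A compact sketch using golang.org/x/crypto/ssh — minikube's real sshutil adds retries not shown here, and host-key checking is skipped only because the target is a throwaway test VM:

package sshutil

import (
	"os"

	"golang.org/x/crypto/ssh"
)

// newClient dials the VM described by the `&{IP:... SSHKeyPath:... Username:docker}`
// struct in the log, authenticating with the per-machine private key.
func newClient(ip, keyPath, user string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM, no known_hosts
	}
	return ssh.Dial("tcp", ip+":22", cfg)
}
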
	I0816 22:42:22.328724   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45831
	I0816 22:42:22.329130   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.329535   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.329554   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.329851   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.329938   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.330124   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.330329   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:66:08", ip: ""} in network mk-no-preload-20210816223156-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:37:07 +0000 UTC Type:0 Mac:52:54:00:1e:66:08 Iaid: IPaddr:192.168.116.66 Prefix:24 Hostname:no-preload-20210816223156-6986 Clientid:01:52:54:00:1e:66:08}
	I0816 22:42:22.330363   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined IP address 192.168.116.66 and MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.330478   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHPort
	I0816 22:42:22.330620   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHKeyPath
	I0816 22:42:22.330750   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHUsername
	I0816 22:42:22.330873   18923 sshutil.go:53] new ssh client: &{IP:192.168.116.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa Username:docker}
	I0816 22:42:22.333001   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .DriverName
	I0816 22:42:22.333246   18923 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:42:22.333262   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:42:22.333279   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHHostname
	I0816 22:42:22.338603   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.339024   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:66:08", ip: ""} in network mk-no-preload-20210816223156-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:37:07 +0000 UTC Type:0 Mac:52:54:00:1e:66:08 Iaid: IPaddr:192.168.116.66 Prefix:24 Hostname:no-preload-20210816223156-6986 Clientid:01:52:54:00:1e:66:08}
	I0816 22:42:22.339055   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined IP address 192.168.116.66 and MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.339242   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHPort
	I0816 22:42:22.339393   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHKeyPath
	I0816 22:42:22.339570   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHUsername
	I0816 22:42:22.339731   18923 sshutil.go:53] new ssh client: &{IP:192.168.116.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa Username:docker}
	I0816 22:42:22.671302   18923 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:42:22.671331   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:42:22.674471   18923 node_ready.go:35] waiting up to 6m0s for node "no-preload-20210816223156-6986" to be "Ready" ...
	I0816 22:42:22.674764   18923 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.116.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:42:22.680985   18923 node_ready.go:49] node "no-preload-20210816223156-6986" has status "Ready":"True"
	I0816 22:42:22.681006   18923 node_ready.go:38] duration metric: took 6.219914ms waiting for node "no-preload-20210816223156-6986" to be "Ready" ...
	I0816 22:42:22.681017   18923 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:42:22.690584   18923 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:22.758871   18923 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:42:22.908102   18923 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:42:22.908132   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:42:23.011738   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:42:23.011768   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:42:23.048103   18923 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:42:23.113442   18923 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:42:23.113472   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:42:23.311431   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:42:23.311461   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:42:23.413450   18923 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:42:23.601523   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:42:23.601554   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:42:23.797882   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:42:23.797908   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:42:23.957080   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:42:23.957109   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:42:24.496102   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:42:24.496134   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:42:24.715720   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:42:24.715807   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:42:24.725833   18923 pod_ready.go:102] pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:24.991135   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:42:24.991165   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:42:25.061259   18923 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.116.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.386242884s)
	I0816 22:42:25.061297   18923 start.go:728] {"host.minikube.internal": 192.168.116.1} host record injected into CoreDNS
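The Completed line above (2.39s) is a sed pipeline that splices a `hosts` block into the CoreDNS Corefile just before its `forward . /etc/resolv.conf` directive, so `host.minikube.internal` resolves to the host IP inside the cluster. The same edit as a standalone Go string transform (a sketch, not minikube's implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block before the "forward ." line of a
// Corefile, mirroring the sed command shown in the log above.
func injectHostRecord(corefile, hostIP string) string {
	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(block)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.116.1"))
}
```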
	I0816 22:42:25.085411   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:42:25.085463   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:42:25.132722   18923 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:42:25.402705   18923 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.64379015s)
	I0816 22:42:25.402772   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.402790   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.403123   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.403222   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:25.403245   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.403270   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.403197   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:25.403597   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:25.404574   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.404594   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:25.404607   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.404616   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.404837   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.404878   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:25.431424   18923 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.383276848s)
	I0816 22:42:25.431470   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.431484   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.431767   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.431781   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:25.431788   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:25.431799   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.431810   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.432092   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.432111   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:22.168138   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:24.174050   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:26.094382   18923 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.680878058s)
	I0816 22:42:26.094446   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:26.094474   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:26.094773   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:26.094830   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:26.094859   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:26.094885   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:26.094774   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:26.095167   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:26.095182   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:26.095193   18923 addons.go:313] Verifying addon metrics-server=true in "no-preload-20210816223156-6986"
	I0816 22:42:26.855647   18923 pod_ready.go:102] pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:27.149522   18923 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.016735128s)
	I0816 22:42:27.149590   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:27.149605   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:27.149955   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:27.150053   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:27.150073   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:27.150083   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:27.150094   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:27.150330   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:27.150347   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:26.575022   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:28.575534   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:27.153345   18923 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0816 22:42:27.153375   18923 addons.go:344] enableAddons completed in 4.88997344s
	I0816 22:42:28.729990   18923 pod_ready.go:92] pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:28.730033   18923 pod_ready.go:81] duration metric: took 6.039413295s waiting for pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:28.730047   18923 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-vf8l9" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.743600   18923 pod_ready.go:97] error getting pod "coredns-78fcd69978-vf8l9" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-vf8l9" not found
	I0816 22:42:30.743642   18923 pod_ready.go:81] duration metric: took 2.013586217s waiting for pod "coredns-78fcd69978-vf8l9" in "kube-system" namespace to be "Ready" ...
	E0816 22:42:30.743656   18923 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-78fcd69978-vf8l9" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-vf8l9" not found
	I0816 22:42:30.743666   18923 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.757721   18923 pod_ready.go:92] pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.757745   18923 pod_ready.go:81] duration metric: took 14.064042ms waiting for pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.757758   18923 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.767053   18923 pod_ready.go:92] pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.767087   18923 pod_ready.go:81] duration metric: took 9.317684ms waiting for pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.767102   18923 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.777595   18923 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.777619   18923 pod_ready.go:81] duration metric: took 10.507966ms waiting for pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.777632   18923 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jhqbx" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.790967   18923 pod_ready.go:92] pod "kube-proxy-jhqbx" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.790991   18923 pod_ready.go:81] duration metric: took 13.350231ms waiting for pod "kube-proxy-jhqbx" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.791003   18923 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:26.174733   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:28.675892   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:30.951607   18923 pod_ready.go:92] pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.951630   18923 pod_ready.go:81] duration metric: took 160.617881ms waiting for pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.951642   18923 pod_ready.go:38] duration metric: took 8.270610362s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
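Each pod_ready wait above polls one pod until its Ready condition reports True. Roughly the same check with client-go (a hypothetical helper; minikube's pod_ready.go differs in detail):

```go
package podready

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the pod's Ready condition is True, roughly what
// the pod_ready.go lines above report on.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient errors: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
```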
	I0816 22:42:30.951663   18923 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:42:30.951723   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:42:30.970609   18923 api_server.go:70] duration metric: took 8.707242252s to wait for apiserver process to appear ...
	I0816 22:42:30.970637   18923 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:42:30.970650   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:42:30.979459   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 200:
	ok
	I0816 22:42:30.980742   18923 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:42:30.980766   18923 api_server.go:129] duration metric: took 10.122149ms to wait for apiserver health ...
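api_server.go then probes https://192.168.116.66:8443/healthz until it answers 200 with body `ok`, as the lines above show. A minimal version of that probe (TLS verification is skipped here only because the sketch carries no CA bundle; minikube itself can trust the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.116.66:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}
```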
	I0816 22:42:30.980777   18923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:42:31.156911   18923 system_pods.go:59] 8 kube-system pods found
	I0816 22:42:31.156942   18923 system_pods.go:61] "coredns-78fcd69978-9rlk6" [83d3b042-5692-4c44-b6f2-65020120666e] Running
	I0816 22:42:31.156949   18923 system_pods.go:61] "etcd-no-preload-20210816223156-6986" [4ea832d0-9ae2-4d9e-9aed-aa581e1ef4cf] Running
	I0816 22:42:31.156956   18923 system_pods.go:61] "kube-apiserver-no-preload-20210816223156-6986" [048331fe-fed3-4cca-a2c6-3b377e26647b] Running
	I0816 22:42:31.156965   18923 system_pods.go:61] "kube-controller-manager-no-preload-20210816223156-6986" [704424de-e6c8-4c70-bd80-c9b68c4c4b57] Running
	I0816 22:42:31.156971   18923 system_pods.go:61] "kube-proxy-jhqbx" [edacd358-46da-4db4-a8db-098f6edefb76] Running
	I0816 22:42:31.156977   18923 system_pods.go:61] "kube-scheduler-no-preload-20210816223156-6986" [7674933f-6de0-4090-867c-c082293d963c] Running
	I0816 22:42:31.156988   18923 system_pods.go:61] "metrics-server-7c784ccb57-dfjww" [7a744b20-6d7f-4001-a322-7e5615cbf15f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:42:31.156998   18923 system_pods.go:61] "storage-provisioner" [c8a5ee3a-f5df-4eeb-92b1-443ecef04ea2] Running
	I0816 22:42:31.157005   18923 system_pods.go:74] duration metric: took 176.222595ms to wait for pod list to return data ...
	I0816 22:42:31.157016   18923 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:42:31.345286   18923 default_sa.go:45] found service account: "default"
	I0816 22:42:31.345311   18923 default_sa.go:55] duration metric: took 188.289571ms for default service account to be created ...
	I0816 22:42:31.345319   18923 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:42:31.555450   18923 system_pods.go:86] 8 kube-system pods found
	I0816 22:42:31.555481   18923 system_pods.go:89] "coredns-78fcd69978-9rlk6" [83d3b042-5692-4c44-b6f2-65020120666e] Running
	I0816 22:42:31.555490   18923 system_pods.go:89] "etcd-no-preload-20210816223156-6986" [4ea832d0-9ae2-4d9e-9aed-aa581e1ef4cf] Running
	I0816 22:42:31.555497   18923 system_pods.go:89] "kube-apiserver-no-preload-20210816223156-6986" [048331fe-fed3-4cca-a2c6-3b377e26647b] Running
	I0816 22:42:31.555503   18923 system_pods.go:89] "kube-controller-manager-no-preload-20210816223156-6986" [704424de-e6c8-4c70-bd80-c9b68c4c4b57] Running
	I0816 22:42:31.555509   18923 system_pods.go:89] "kube-proxy-jhqbx" [edacd358-46da-4db4-a8db-098f6edefb76] Running
	I0816 22:42:31.555515   18923 system_pods.go:89] "kube-scheduler-no-preload-20210816223156-6986" [7674933f-6de0-4090-867c-c082293d963c] Running
	I0816 22:42:31.555529   18923 system_pods.go:89] "metrics-server-7c784ccb57-dfjww" [7a744b20-6d7f-4001-a322-7e5615cbf15f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:42:31.555541   18923 system_pods.go:89] "storage-provisioner" [c8a5ee3a-f5df-4eeb-92b1-443ecef04ea2] Running
	I0816 22:42:31.555553   18923 system_pods.go:126] duration metric: took 210.228822ms to wait for k8s-apps to be running ...
	I0816 22:42:31.555566   18923 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:42:31.555615   18923 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:42:31.581892   18923 system_svc.go:56] duration metric: took 26.318542ms WaitForService to wait for kubelet.
	I0816 22:42:31.581920   18923 kubeadm.go:547] duration metric: took 9.318562144s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:42:31.581949   18923 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:42:31.744656   18923 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:42:31.744683   18923 node_conditions.go:123] node cpu capacity is 2
	I0816 22:42:31.744699   18923 node_conditions.go:105] duration metric: took 162.745304ms to run NodePressure ...
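node_conditions.go reads the node's ephemeral-storage and CPU capacity (17784752Ki and 2 above) before declaring NodePressure verified. A sketch of fetching those two figures with client-go (the helper name is made up):

```go
package nodecheck

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// capacities returns the figures the node_conditions lines print:
// ephemeral-storage and CPU capacity of one node.
func capacities(cs kubernetes.Interface, name string) (storage, cpu string, err error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return "", "", err
	}
	s := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	c := node.Status.Capacity[corev1.ResourceCPU]
	return s.String(), c.String(), nil
}
```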
	I0816 22:42:31.744708   18923 start.go:231] waiting for startup goroutines ...
	I0816 22:42:31.799332   18923 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0816 22:42:31.801873   18923 out.go:177] 
	W0816 22:42:31.802045   18923 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0816 22:42:31.803807   18923 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0816 22:42:31.805603   18923 out.go:177] * Done! kubectl is now configured to use "no-preload-20210816223156-6986" cluster and "default" namespace by default
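start.go compares the host kubectl's minor version (1.20) against the cluster's (1.22) and warns once the skew exceeds one minor release, which is why the warning above fires. A toy version of that comparison (hand-rolled parsing; a real implementation would use a semver library):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component from a version like "1.22.0-rc.0".
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	n, _ := strconv.Atoi(parts[1])
	return n
}

func main() {
	kubectl, cluster := "1.20.5", "1.22.0-rc.0"
	skew := minor(cluster) - minor(kubectl)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d\n", skew) // 2 here, so minikube prints the warning
}
```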
	I0816 22:42:34.356504   18635 out.go:204]   - Configuring RBAC rules ...
	I0816 22:42:34.810198   18635 cni.go:93] Creating CNI manager for ""
	I0816 22:42:34.810227   18635 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:42:30.576523   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:33.074048   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:35.075110   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:31.178766   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:33.673945   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:35.674516   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:34.812149   18635 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:42:34.812218   18635 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:42:34.823097   18635 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:42:34.840052   18635 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:42:34.840175   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=old-k8s-version-20210816223154-6986 minikube.k8s.io/updated_at=2021_08_16T22_42_34_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:34.840179   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:35.279911   18635 ops.go:34] apiserver oom_adj: 16
	I0816 22:42:35.279930   18635 ops.go:39] adjusting apiserver oom_adj to -10
	I0816 22:42:35.279944   18635 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
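ops.go reads the apiserver's oom_adj (16 here) and rewrites it to -10 so the kernel's OOM killer deprioritizes the apiserver. The same read-then-write in plain Go (must run as root; the pid is a placeholder for what `pgrep kube-apiserver` would return):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// adjustOOM mirrors the two shell commands in the log: read
// /proc/<pid>/oom_adj, then overwrite it with -10.
func adjustOOM(pid int) error {
	path := fmt.Sprintf("/proc/%d/oom_adj", pid)
	cur, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(cur)))
	return os.WriteFile(path, []byte("-10\n"), 0644)
}

func main() {
	if err := adjustOOM(1234); err != nil { // pid is a placeholder
		log.Fatal(err)
	}
}
```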
	I0816 22:42:35.279997   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:35.887807   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:36.388228   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:36.888072   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:37.388131   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:37.888197   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:37.075407   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:39.574205   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:38.169080   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:40.669388   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:38.388192   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:38.887529   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:39.387314   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:39.887397   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:40.388222   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:40.887817   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:41.388165   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:41.887336   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:42.387710   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:42.887452   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:41.575892   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:44.074399   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:43.168677   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:45.674667   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:43.388233   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:43.888191   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:44.388190   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:44.888073   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:45.387300   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:45.887633   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:46.388266   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:46.887918   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:47.387283   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:47.887770   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:46.074552   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:48.573015   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:48.387776   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:48.888189   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:49.388262   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:49.887594   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:50.137803   18635 kubeadm.go:985] duration metric: took 15.297678668s to wait for elevateKubeSystemPrivileges.
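The long run of `kubectl get sa default` lines above is a ~500ms retry loop that exits once the default service account exists, 15.3s in this run. A minimal sketch of that loop with os/exec (binary path shortened from the log's /var/lib/minikube/binaries/v1.14.0/kubectl):

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Equivalent of the repeated log line, minus sudo and the full path.
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for default service account")
}
```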
	I0816 22:42:50.137838   18635 kubeadm.go:392] StartCluster complete in 5m52.622280434s
	I0816 22:42:50.137865   18635 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:42:50.137996   18635 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:42:50.140032   18635 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:42:50.769953   18635 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20210816223154-6986" rescaled to 1
	I0816 22:42:50.770028   18635 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.94.246 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0816 22:42:50.771768   18635 out.go:177] * Verifying Kubernetes components...
	I0816 22:42:50.771833   18635 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:42:50.770075   18635 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:42:50.770097   18635 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:42:50.770295   18635 config.go:177] Loaded profile config "old-k8s-version-20210816223154-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0816 22:42:50.771981   18635 addons.go:59] Setting default-storageclass=true in profile "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.771981   18635 addons.go:59] Setting storage-provisioner=true in profile "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.771999   18635 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.772004   18635 addons.go:135] Setting addon storage-provisioner=true in "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.771995   18635 addons.go:59] Setting dashboard=true in profile "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.772027   18635 addons.go:135] Setting addon dashboard=true in "old-k8s-version-20210816223154-6986"
	W0816 22:42:50.772039   18635 addons.go:147] addon dashboard should already be in state true
	I0816 22:42:50.772074   18635 host.go:66] Checking if "old-k8s-version-20210816223154-6986" exists ...
	I0816 22:42:50.771981   18635 addons.go:59] Setting metrics-server=true in profile "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.772106   18635 addons.go:135] Setting addon metrics-server=true in "old-k8s-version-20210816223154-6986"
	W0816 22:42:50.772118   18635 addons.go:147] addon metrics-server should already be in state true
	I0816 22:42:50.772143   18635 host.go:66] Checking if "old-k8s-version-20210816223154-6986" exists ...
	W0816 22:42:50.772012   18635 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:42:50.772202   18635 host.go:66] Checking if "old-k8s-version-20210816223154-6986" exists ...
	I0816 22:42:50.772450   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.772491   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.772514   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.772550   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.772562   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.772590   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.772850   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.772907   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.786384   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43919
	I0816 22:42:50.786896   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.787436   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.787463   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.787854   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.788085   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.788330   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36229
	I0816 22:42:50.788749   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.789268   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.789290   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.789622   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.790176   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.790222   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.795830   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42399
	I0816 22:42:50.795865   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0816 22:42:50.796347   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.796355   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.796868   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.796888   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.796872   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.796936   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.797257   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.797329   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.797807   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.797848   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.797871   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.797906   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.799195   18635 addons.go:135] Setting addon default-storageclass=true in "old-k8s-version-20210816223154-6986"
	W0816 22:42:50.799218   18635 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:42:50.799243   18635 host.go:66] Checking if "old-k8s-version-20210816223154-6986" exists ...
	I0816 22:42:50.799640   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.799681   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.810531   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40999
	I0816 22:42:50.811204   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.811785   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.811802   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.812347   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.812540   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.815618   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44099
	I0816 22:42:50.815827   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34305
	I0816 22:42:50.816141   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.816227   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.816697   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.816714   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.816835   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.816854   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.817100   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.817172   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.817189   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .DriverName
	I0816 22:42:50.817352   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.819885   18635 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:42:50.817704   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.820954   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .DriverName
	I0816 22:42:50.821662   18635 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:42:50.821713   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.821719   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:42:50.821731   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:42:50.821750   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHHostname
	I0816 22:42:50.823437   18635 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:42:50.822272   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33579
	I0816 22:42:50.823493   18635 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:42:50.823505   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:42:50.823522   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHHostname
	I0816 22:42:50.823823   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.824293   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.824311   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.824702   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.824895   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.828911   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.828954   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .DriverName
	I0816 22:42:47.677798   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:50.171236   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:50.830871   18635 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:42:50.830990   18635 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:42:50.831003   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:42:50.831019   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHHostname
	I0816 22:42:50.829748   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:23:71", ip: ""} in network mk-old-k8s-version-20210816223154-6986: {Iface:virbr6 ExpiryTime:2021-08-16 23:36:29 +0000 UTC Type:0 Mac:52:54:00:bf:23:71 Iaid: IPaddr:192.168.94.246 Prefix:24 Hostname:old-k8s-version-20210816223154-6986 Clientid:01:52:54:00:bf:23:71}
	I0816 22:42:50.831084   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined IP address 192.168.94.246 and MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.829926   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHPort
	I0816 22:42:50.830586   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.831142   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:23:71", ip: ""} in network mk-old-k8s-version-20210816223154-6986: {Iface:virbr6 ExpiryTime:2021-08-16 23:36:29 +0000 UTC Type:0 Mac:52:54:00:bf:23:71 Iaid: IPaddr:192.168.94.246 Prefix:24 Hostname:old-k8s-version-20210816223154-6986 Clientid:01:52:54:00:bf:23:71}
	I0816 22:42:50.831171   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined IP address 192.168.94.246 and MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.831303   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHKeyPath
	I0816 22:42:50.831452   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHUsername
	I0816 22:42:50.831626   18635 sshutil.go:53] new ssh client: &{IP:192.168.94.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa Username:docker}
	I0816 22:42:50.831935   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHPort
	I0816 22:42:50.832101   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHKeyPath
	I0816 22:42:50.832284   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHUsername
	I0816 22:42:50.832496   18635 sshutil.go:53] new ssh client: &{IP:192.168.94.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa Username:docker}
	I0816 22:42:50.835565   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34581
	I0816 22:42:50.836045   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.836624   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.836646   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.836952   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.837022   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.837210   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.837385   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:23:71", ip: ""} in network mk-old-k8s-version-20210816223154-6986: {Iface:virbr6 ExpiryTime:2021-08-16 23:36:29 +0000 UTC Type:0 Mac:52:54:00:bf:23:71 Iaid: IPaddr:192.168.94.246 Prefix:24 Hostname:old-k8s-version-20210816223154-6986 Clientid:01:52:54:00:bf:23:71}
	I0816 22:42:50.837420   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined IP address 192.168.94.246 and MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.837596   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHPort
	I0816 22:42:50.837797   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHKeyPath
	I0816 22:42:50.837973   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHUsername
	I0816 22:42:50.838150   18635 sshutil.go:53] new ssh client: &{IP:192.168.94.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa Username:docker}
	I0816 22:42:50.839968   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .DriverName
	I0816 22:42:50.840224   18635 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:42:50.840241   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:42:50.840256   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHHostname
	I0816 22:42:50.846248   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.846622   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:23:71", ip: ""} in network mk-old-k8s-version-20210816223154-6986: {Iface:virbr6 ExpiryTime:2021-08-16 23:36:29 +0000 UTC Type:0 Mac:52:54:00:bf:23:71 Iaid: IPaddr:192.168.94.246 Prefix:24 Hostname:old-k8s-version-20210816223154-6986 Clientid:01:52:54:00:bf:23:71}
	I0816 22:42:50.846648   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined IP address 192.168.94.246 and MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.846901   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHPort
	I0816 22:42:50.847072   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHKeyPath
	I0816 22:42:50.847256   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHUsername
	I0816 22:42:50.847384   18635 sshutil.go:53] new ssh client: &{IP:192.168.94.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa Username:docker}
	I0816 22:42:51.069324   18635 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20210816223154-6986" to be "Ready" ...
	I0816 22:42:51.069363   18635 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:42:51.074198   18635 node_ready.go:49] node "old-k8s-version-20210816223154-6986" has status "Ready":"True"
	I0816 22:42:51.074219   18635 node_ready.go:38] duration metric: took 4.853226ms waiting for node "old-k8s-version-20210816223154-6986" to be "Ready" ...
	I0816 22:42:51.074228   18635 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:42:51.079427   18635 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:51.095977   18635 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:42:51.095994   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:42:51.114667   18635 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:42:51.127402   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:42:51.127423   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:42:51.139080   18635 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:42:51.142203   18635 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:42:51.142227   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:42:51.184024   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:42:51.184049   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:42:51.229690   18635 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:42:51.229719   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:42:51.258163   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:42:51.258186   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:42:51.292848   18635 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:42:51.348950   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:42:51.348979   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:42:51.432982   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:42:51.433017   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:42:51.500730   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:42:51.500762   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:42:51.566104   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:42:51.566132   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:42:51.669547   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:42:51.669569   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:42:51.755011   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:42:51.755042   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:42:51.807684   18635 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:42:52.571594   18635 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.502197835s)
	I0816 22:42:52.571636   18635 start.go:728] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
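The injected hosts block comes from the bash pipeline logged at 22:42:51/22:42:52: fetch the coredns ConfigMap, splice a hosts{} stanza ahead of the forward plugin with sed, and kubectl replace the result. A sketch of that pipeline driven from Go, with the kubectl path and gateway IP taken from the log (the helper name is illustrative, not minikube's API):

package main

import (
	"fmt"
	"os/exec"
)

// injectHostRecord mirrors the logged command: get the coredns ConfigMap,
// insert a hosts{} block mapping host.minikube.internal to the gateway IP,
// and replace the ConfigMap with the edited YAML.
func injectHostRecord(kubectl, kubeconfig, gatewayIP string) error {
	script := fmt.Sprintf(
		`sudo %[1]s --kubeconfig=%[2]s -n kube-system get configmap coredns -o yaml | `+
			`sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           %[3]s host.minikube.internal\n           fallthrough\n        }' | `+
			`sudo %[1]s --kubeconfig=%[2]s replace -f -`,
		kubectl, kubeconfig, gatewayIP)
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		return fmt.Errorf("inject host record: %v: %s", err, out)
	}
	return nil
}

func main() {
	err := injectHostRecord("/var/lib/minikube/binaries/v1.14.0/kubectl",
		"/var/lib/minikube/kubeconfig", "192.168.94.1")
	fmt.Println(err)
}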
	I0816 22:42:52.759651   18635 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.644944376s)
	I0816 22:42:52.759687   18635 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.620572399s)
	I0816 22:42:52.759727   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.759743   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.759751   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.759765   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.760012   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.760058   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:52.760071   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.760080   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.760115   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.760131   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:52.760156   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.760170   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.761684   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.761690   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | Closing plugin on server side
	I0816 22:42:52.761704   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:52.761719   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | Closing plugin on server side
	I0816 22:42:52.761689   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.761794   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:52.761806   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.761817   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.762085   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.762108   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:53.390381   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:53.699731   18635 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.406829667s)
	I0816 22:42:53.699820   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:53.699836   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:53.700202   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:53.700222   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:53.700238   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:53.700249   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:53.700503   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:53.700523   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:53.700538   18635 addons.go:313] Verifying addon metrics-server=true in "old-k8s-version-20210816223154-6986"
	I0816 22:42:54.131359   18635 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.323617191s)
	I0816 22:42:54.131419   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:54.131434   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:54.131720   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | Closing plugin on server side
	I0816 22:42:54.131759   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:54.131767   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:54.131782   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:54.131793   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:54.132029   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:54.132048   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:50.574063   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:53.075372   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:52.670047   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:54.673975   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:54.134079   18635 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0816 22:42:54.134104   18635 addons.go:344] enableAddons completed in 3.364015112s
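Note the interleaved scp/apply lines above: the four addons appear to be applied concurrently, and enableAddons returns once every apply has completed (3.36s here). A minimal sketch of that fan-out, with the per-addon scp + kubectl apply work stubbed out:

package main

import (
	"fmt"
	"sync"
	"time"
)

// applyAddon stands in for copying an addon's manifests and applying them.
func applyAddon(name string) {
	time.Sleep(100 * time.Millisecond)
	fmt.Printf("addon %s applied\n", name)
}

func main() {
	addons := []string{"storage-provisioner", "default-storageclass", "metrics-server", "dashboard"}
	start := time.Now()
	var wg sync.WaitGroup
	for _, a := range addons {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			applyAddon(name)
		}(a)
	}
	wg.Wait() // return only after every addon has finished
	fmt.Printf("enableAddons completed in %v\n", time.Since(start))
}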
	I0816 22:42:55.589126   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:57.594328   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:55.581048   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:58.075675   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:57.167077   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:59.670483   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:59.594568   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:02.093248   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:00.574293   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:02.574884   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:05.075277   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:02.159000   18929 pod_ready.go:81] duration metric: took 4m0.401738783s waiting for pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace to be "Ready" ...
	E0816 22:43:02.159021   18929 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:43:02.159049   18929 pod_ready.go:38] duration metric: took 4m41.323642164s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:02.159079   18929 kubeadm.go:604] restartCluster took 5m14.823391905s
	W0816 22:43:02.159203   18929 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:43:02.159238   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0816 22:43:05.238090   18929 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.078818721s)
	I0816 22:43:05.238168   18929 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:43:05.256580   18929 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:43:05.256649   18929 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:43:05.300644   18929 cri.go:76] found id: ""
	I0816 22:43:05.300755   18929 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:43:05.308191   18929 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:43:05.315888   18929 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
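Because all four kubeadm-generated configs are absent, the stale-config cleanup is skipped and the code falls through to a fresh kubeadm init below. A sketch of that presence check (file list taken from the log; the logic is a simplification of kubeadm.go):

package main

import (
	"fmt"
	"os"
)

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	allPresent := true
	for _, f := range files {
		if _, err := os.Stat(f); err != nil {
			fmt.Printf("config check failed (%v), skipping stale config cleanup\n", err)
			allPresent = false
			break
		}
	}
	if allPresent {
		fmt.Println("all configs present; clean up stale configs before re-init")
	}
}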
	I0816 22:43:05.315936   18929 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0816 22:43:05.885054   18929 out.go:204]   - Generating certificates and keys ...
	I0816 22:43:04.591211   18635 pod_ready.go:92] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:04.591250   18635 pod_ready.go:81] duration metric: took 13.511789308s waiting for pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:04.591266   18635 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jmg6d" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:04.598816   18635 pod_ready.go:92] pod "kube-proxy-jmg6d" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:04.598833   18635 pod_ready.go:81] duration metric: took 7.559474ms waiting for pod "kube-proxy-jmg6d" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:04.598842   18635 pod_ready.go:38] duration metric: took 13.524600915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:04.598861   18635 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:43:04.598908   18635 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:43:04.615708   18635 api_server.go:70] duration metric: took 13.845635855s to wait for apiserver process to appear ...
	I0816 22:43:04.615739   18635 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:43:04.615748   18635 api_server.go:239] Checking apiserver healthz at https://192.168.94.246:8443/healthz ...
	I0816 22:43:04.624860   18635 api_server.go:265] https://192.168.94.246:8443/healthz returned 200:
	ok
	I0816 22:43:04.626456   18635 api_server.go:139] control plane version: v1.14.0
	I0816 22:43:04.626478   18635 api_server.go:129] duration metric: took 10.733471ms to wait for apiserver health ...
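The healthz probe above is a plain HTTPS GET that expects a 200 response with body "ok". A minimal sketch; the InsecureSkipVerify is a shortcut for the sketch only (minikube itself trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy returns true when GET <url> yields HTTP 200 and body "ok".
func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.94.246:8443/healthz")
	fmt.Println(ok, err)
}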
	I0816 22:43:04.626487   18635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:43:04.631832   18635 system_pods.go:59] 4 kube-system pods found
	I0816 22:43:04.631861   18635 system_pods.go:61] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.631867   18635 system_pods.go:61] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.631877   18635 system_pods.go:61] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:04.631883   18635 system_pods.go:61] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.631892   18635 system_pods.go:74] duration metric: took 5.399191ms to wait for pod list to return data ...
	I0816 22:43:04.631901   18635 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:43:04.635992   18635 default_sa.go:45] found service account: "default"
	I0816 22:43:04.636015   18635 default_sa.go:55] duration metric: took 4.107562ms for default service account to be created ...
	I0816 22:43:04.636025   18635 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:43:04.640667   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:04.640691   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.640697   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.640704   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:04.640709   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.640726   18635 retry.go:31] will retry after 305.063636ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:04.951327   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:04.951357   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.951365   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.951377   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:04.951384   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.951402   18635 retry.go:31] will retry after 338.212508ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:05.295109   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:05.295143   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.295154   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.295165   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:05.295174   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.295193   18635 retry.go:31] will retry after 378.459802ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:05.683391   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:05.683423   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.683431   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.683442   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:05.683452   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.683472   18635 retry.go:31] will retry after 469.882201ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:06.158721   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:06.158752   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.158757   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.158765   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:06.158770   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.158786   18635 retry.go:31] will retry after 667.365439ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:06.831740   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:06.831771   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.831781   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.831790   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:06.831799   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.831818   18635 retry.go:31] will retry after 597.243124ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:07.434457   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:07.434482   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:07.434487   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:07.434494   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:07.434499   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:07.434513   18635 retry.go:31] will retry after 789.889932ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:07.075753   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:09.575726   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:06.996973   18929 out.go:204]   - Booting up control plane ...
	I0816 22:43:08.229786   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:08.229819   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:08.229827   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:08.229840   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:08.229845   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:08.229863   18635 retry.go:31] will retry after 951.868007ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:09.187817   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:09.187852   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:09.187862   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:09.187873   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:09.187878   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:09.187895   18635 retry.go:31] will retry after 1.341783893s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:10.534567   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:10.534608   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:10.534615   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:10.534627   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:10.534634   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:10.534652   18635 retry.go:31] will retry after 1.876813009s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:12.418546   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:12.418572   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:12.418579   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:12.418590   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:12.418596   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:12.418612   18635 retry.go:31] will retry after 2.6934314s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
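The retry.go lines show the poll interval growing from roughly 300ms toward several seconds, with jitter. A sketch of that retry-with-backoff pattern, assuming a caller-supplied check function (the backoff constants below are illustrative, not minikube's):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls check until it succeeds or attempts run out,
// sleeping a jittered, growing delay between tries.
func retryWithBackoff(check func() error, initial time.Duration, attempts int) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := check(); err == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return errors.New("components still missing after all attempts")
}

func main() {
	tries := 0
	_ = retryWithBackoff(func() error {
		tries++
		if tries < 4 {
			return errors.New("missing components: etcd, kube-apiserver")
		}
		return nil
	}, 300*time.Millisecond, 10)
}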
	I0816 22:43:11.066632   19204 pod_ready.go:81] duration metric: took 4m0.008014176s waiting for pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace to be "Ready" ...
	E0816 22:43:11.066660   19204 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:43:11.066679   19204 pod_ready.go:38] duration metric: took 4m27.623084704s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:11.066704   19204 kubeadm.go:604] restartCluster took 5m3.415779611s
	W0816 22:43:11.066819   19204 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:43:11.066856   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0816 22:43:14.269873   19204 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.202987817s)
	I0816 22:43:14.269950   19204 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:43:14.288386   19204 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:43:14.288469   19204 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:43:14.333856   19204 cri.go:76] found id: ""
	I0816 22:43:14.333935   19204 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:43:14.343737   19204 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:43:14.352599   19204 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:43:14.352646   19204 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0816 22:43:14.930093   19204 out.go:204]   - Generating certificates and keys ...
	I0816 22:43:15.118830   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:15.118862   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:15.118872   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:15.118882   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:15.118889   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:15.118907   18635 retry.go:31] will retry after 2.494582248s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:17.619339   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:17.619375   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:17.619384   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:17.619395   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:17.619403   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:17.619422   18635 retry.go:31] will retry after 3.420895489s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:15.729873   19204 out.go:204]   - Booting up control plane ...
	I0816 22:43:21.047237   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:21.047269   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:21.047276   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:21.047287   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:21.047294   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:21.047310   18635 retry.go:31] will retry after 4.133785681s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:22.636356   18929 out.go:204]   - Configuring RBAC rules ...
	I0816 22:43:23.371015   18929 cni.go:93] Creating CNI manager for ""
	I0816 22:43:23.371043   18929 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:43:23.373006   18929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:43:23.373076   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:43:23.386712   18929 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
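The two lines above create /etc/cni/net.d and push a 457-byte conflist for the bridge CNI. The exact payload is not shown in the log; the JSON below is a representative bridge + portmap conflist, not minikube's literal file:

package main

import (
	"fmt"
	"os"
)

// A typical two-plugin CNI conflist: a bridge with host-local IPAM plus
// the portmap plugin for hostPort support. Subnet and names are assumptions.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "k8s",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
    },
    {"type": "portmap", "capabilities": {"portMappings": true}}
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println(err)
	}
}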
	I0816 22:43:23.415554   18929 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:43:23.415693   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:23.415773   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=embed-certs-20210816223333-6986 minikube.k8s.io/updated_at=2021_08_16T22_43_23_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:24.042222   18929 ops.go:34] apiserver oom_adj: -16
	I0816 22:43:24.042207   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:24.699493   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:25.199877   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:25.699926   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:25.189718   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:25.189751   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:25.189758   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:25.189768   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:25.189775   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:25.189795   18635 retry.go:31] will retry after 5.595921491s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	f408bf522922a       523cad1a4df73       51 seconds ago       Exited              dashboard-metrics-scraper   1                   cd35b65dbf056
	64955f6935975       9a07b5b4bfac0       58 seconds ago       Running             kubernetes-dashboard        0                   53b71c03b338b
	89a458187aec6       6e38f40d628db       About a minute ago   Exited              storage-provisioner         0                   48a5117f3b82f
	e0bb5a872be5d       8d147537fb7d1       About a minute ago   Running             coredns                     0                   d3e8dbb1065a0
	4cbc69479c98f       ea6b13ed84e03       About a minute ago   Running             kube-proxy                  0                   a2d8648503867
	1e3071c5b1c50       0048118155842       About a minute ago   Running             etcd                        2                   fdbc7e8443532
	cb12185bdacc3       7da2efaa5b480       About a minute ago   Running             kube-scheduler              2                   10155044a33d1
	a5a4025cb2615       cf9cba6c3e4a8       About a minute ago   Running             kube-controller-manager     2                   bbeb739aeb370
	f7e4e0952db6f       b2462aa94d403       About a minute ago   Running             kube-apiserver              2                   a5ef942d9974f
	f45ec9b98801b       56cc512116c8f       5 minutes ago        Exited              busybox                     1                   6f71853f758e6
	c265ff52803b4       8d147537fb7d1       5 minutes ago        Exited              coredns                     1                   6f468f8c94515
	
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2021-08-16 22:37:06 UTC, end at Mon 2021-08-16 22:43:29 UTC. --
	Aug 16 22:42:37 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:37.130703264Z" level=info msg="StartContainer for \"6e961ef114c53f4caadc1d127a23aeb672481fd3e699e195475ad5da9efadb60\" returns successfully"
	Aug 16 22:42:37 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:37.177108643Z" level=info msg="Finish piping stderr of container \"6e961ef114c53f4caadc1d127a23aeb672481fd3e699e195475ad5da9efadb60\""
	Aug 16 22:42:37 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:37.177347709Z" level=info msg="Finish piping stdout of container \"6e961ef114c53f4caadc1d127a23aeb672481fd3e699e195475ad5da9efadb60\""
	Aug 16 22:42:37 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:37.180213413Z" level=info msg="TaskExit event &TaskExit{ContainerID:6e961ef114c53f4caadc1d127a23aeb672481fd3e699e195475ad5da9efadb60,ID:6e961ef114c53f4caadc1d127a23aeb672481fd3e699e195475ad5da9efadb60,Pid:6288,ExitStatus:1,ExitedAt:2021-08-16 22:42:37.179527212 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:42:37 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:37.241473350Z" level=info msg="shim disconnected" id=6e961ef114c53f4caadc1d127a23aeb672481fd3e699e195475ad5da9efadb60
	Aug 16 22:42:37 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:37.241863109Z" level=error msg="copy shim log" error="read /proc/self/fd/128: file already closed"
	Aug 16 22:42:37 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:37.563129823Z" level=info msg="CreateContainer within sandbox \"cd35b65dbf056e2a246c7f0327763ece3bf3d994f02dd6b9afd6e03f3ea48b74\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 16 22:42:37 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:37.638635394Z" level=info msg="CreateContainer within sandbox \"cd35b65dbf056e2a246c7f0327763ece3bf3d994f02dd6b9afd6e03f3ea48b74\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2\""
	Aug 16 22:42:37 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:37.647762432Z" level=info msg="StartContainer for \"f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2\""
	Aug 16 22:42:38 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:38.031309589Z" level=info msg="StartContainer for \"f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2\" returns successfully"
	Aug 16 22:42:38 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:38.069992253Z" level=info msg="Finish piping stderr of container \"f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2\""
	Aug 16 22:42:38 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:38.070481010Z" level=info msg="Finish piping stdout of container \"f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2\""
	Aug 16 22:42:38 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:38.073080983Z" level=info msg="TaskExit event &TaskExit{ContainerID:f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2,ID:f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2,Pid:6358,ExitStatus:1,ExitedAt:2021-08-16 22:42:38.072430716 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:42:38 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:38.129822243Z" level=info msg="shim disconnected" id=f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2
	Aug 16 22:42:38 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:38.129963592Z" level=error msg="copy shim log" error="read /proc/self/fd/128: file already closed"
	Aug 16 22:42:38 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:38.574733472Z" level=info msg="RemoveContainer for \"6e961ef114c53f4caadc1d127a23aeb672481fd3e699e195475ad5da9efadb60\""
	Aug 16 22:42:38 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:38.585802037Z" level=info msg="RemoveContainer for \"6e961ef114c53f4caadc1d127a23aeb672481fd3e699e195475ad5da9efadb60\" returns successfully"
	Aug 16 22:42:39 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:39.084909344Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:42:39 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:39.089354034Z" level=info msg="trying next host" error="failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" host=fake.domain
	Aug 16 22:42:39 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:39.094999774Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Aug 16 22:42:55 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:55.144867032Z" level=info msg="TaskExit event &TaskExit{ContainerID:89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2,ID:89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2,Pid:6023,ExitStatus:255,ExitedAt:2021-08-16 22:42:55.144233318 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:42:55 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:55.144894862Z" level=info msg="Finish piping stderr of container \"89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2\""
	Aug 16 22:42:55 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:55.145828833Z" level=info msg="Finish piping stdout of container \"89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2\""
	Aug 16 22:42:55 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:55.211075164Z" level=info msg="shim disconnected" id=89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2
	Aug 16 22:42:55 no-preload-20210816223156-6986 containerd[2110]: time="2021-08-16T22:42:55.211392176Z" level=error msg="copy shim log" error="read /proc/self/fd/122: file already closed"
	
	* 
	* ==> coredns [c265ff52803b42d9b0ff6036d0ca9fe4b655c9875474baa9a0a3f7255d230e17] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = ef4aca0642b1bd212f9628ab01cc3780
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	* 
	* ==> coredns [e0bb5a872be5dee5a28ce3431d7728ab3b6d152dacd42c354ec65b03076e30d0] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.4
	linux/amd64, go1.16.4, 053c4d5
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = ef4aca0642b1bd212f9628ab01cc3780
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +3.433315] systemd-fstab-generator[1162]: Ignoring "noauto" for root device
	[  +0.033608] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.953752] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1722 comm=systemd-network
	[  +0.655674] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.304031] vboxguest: loading out-of-tree module taints kernel.
	[  +0.007344] vboxguest: PCI device not found, probably running on physical hardware.
	[  +1.886202] systemd-fstab-generator[2057]: Ignoring "noauto" for root device
	[  +0.150635] systemd-fstab-generator[2070]: Ignoring "noauto" for root device
	[  +0.197543] systemd-fstab-generator[2100]: Ignoring "noauto" for root device
	[  +6.271369] systemd-fstab-generator[2299]: Ignoring "noauto" for root device
	[ +14.991038] kauditd_printk_skb: 38 callbacks suppressed
	[  +5.035081] kauditd_printk_skb: 98 callbacks suppressed
	[  +9.853332] kauditd_printk_skb: 41 callbacks suppressed
	[Aug16 22:38] kauditd_printk_skb: 2 callbacks suppressed
	[Aug16 22:39] NFSD: Unable to end grace period: -110
	[Aug16 22:41] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.186651] systemd-fstab-generator[4557]: Ignoring "noauto" for root device
	[Aug16 22:42] systemd-fstab-generator[4945]: Ignoring "noauto" for root device
	[ +13.715349] kauditd_printk_skb: 68 callbacks suppressed
	[  +5.284900] kauditd_printk_skb: 47 callbacks suppressed
	[ +10.335714] kauditd_printk_skb: 80 callbacks suppressed
	[  +5.161347] systemd-fstab-generator[6406]: Ignoring "noauto" for root device
	[  +0.769765] systemd-fstab-generator[6462]: Ignoring "noauto" for root device
	[  +0.957588] systemd-fstab-generator[6516]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [1e3071c5b1c506432365e271b588aadbfd2eda919a23dcea85d60022046acfb4] <==
	* {"level":"info","ts":"2021-08-16T22:41:59.934Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"d79850b30a557227","local-member-id":"69ff0102d0d103a7","added-peer-id":"69ff0102d0d103a7","added-peer-peer-urls":["https://192.168.116.66:2380"]}
	{"level":"info","ts":"2021-08-16T22:41:59.934Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"69ff0102d0d103a7","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2021-08-16T22:41:59.947Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-08-16T22:41:59.950Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"69ff0102d0d103a7","initial-advertise-peer-urls":["https://192.168.116.66:2380"],"listen-peer-urls":["https://192.168.116.66:2380"],"advertise-client-urls":["https://192.168.116.66:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.116.66:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-08-16T22:41:59.950Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.116.66:2380"}
	{"level":"info","ts":"2021-08-16T22:41:59.951Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.116.66:2380"}
	{"level":"info","ts":"2021-08-16T22:41:59.950Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-08-16T22:42:00.397Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"69ff0102d0d103a7 is starting a new election at term 1"}
	{"level":"info","ts":"2021-08-16T22:42:00.398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"69ff0102d0d103a7 became pre-candidate at term 1"}
	{"level":"info","ts":"2021-08-16T22:42:00.398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"69ff0102d0d103a7 received MsgPreVoteResp from 69ff0102d0d103a7 at term 1"}
	{"level":"info","ts":"2021-08-16T22:42:00.398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"69ff0102d0d103a7 became candidate at term 2"}
	{"level":"info","ts":"2021-08-16T22:42:00.398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"69ff0102d0d103a7 received MsgVoteResp from 69ff0102d0d103a7 at term 2"}
	{"level":"info","ts":"2021-08-16T22:42:00.399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"69ff0102d0d103a7 became leader at term 2"}
	{"level":"info","ts":"2021-08-16T22:42:00.399Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 69ff0102d0d103a7 elected leader 69ff0102d0d103a7 at term 2"}
	{"level":"info","ts":"2021-08-16T22:42:00.400Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-16T22:42:00.403Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"d79850b30a557227","local-member-id":"69ff0102d0d103a7","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-16T22:42:00.403Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-16T22:42:00.403Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-16T22:42:00.403Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"69ff0102d0d103a7","local-member-attributes":"{Name:no-preload-20210816223156-6986 ClientURLs:[https://192.168.116.66:2379]}","request-path":"/0/members/69ff0102d0d103a7/attributes","cluster-id":"d79850b30a557227","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-16T22:42:00.404Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-16T22:42:00.406Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.116.66:2379"}
	{"level":"info","ts":"2021-08-16T22:42:00.404Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-16T22:42:00.408Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-08-16T22:42:00.404Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-16T22:42:00.411Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  22:44:29 up 7 min,  0 users,  load average: 0.50, 0.89, 0.49
	Linux no-preload-20210816223156-6986 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [f7e4e0952db6f386f632efe7b9c51f084992c1ef01a488ff933fd58c8c0522f0] <==
	* E0816 22:44:26.645262       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0816 22:44:26.645959       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0816 22:44:26.646684       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:44:26.646721       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:44:26.647787       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:44:26.650055       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:44:26.651265       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0816 22:44:26.652519       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0816 22:44:26.653962       1 trace.go:205] Trace[1814883465]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:e338007d-571b-4e30-a4dd-d54d1565467c,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:43:26.645) (total time: 60008ms):
	Trace[1814883465]: [1m0.008521001s] [1m0.008521001s] END
	E0816 22:44:26.654151       1 timeout.go:135] post-timeout activity - time-elapsed: 7.106136ms, GET "/api/v1/namespaces/default" result: <nil>
	I0816 22:44:26.655537       1 trace.go:205] Trace[1407321994]: "Get" url:/api/v1/namespaces/kube-public,user-agent:kube-apiserver/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:c26e4e92-0e04-4a6d-bada-48b9c48a4466,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:43:26.646) (total time: 60009ms):
	Trace[1407321994]: [1m0.009467045s] [1m0.009467045s] END
	E0816 22:44:26.655768       1 timeout.go:135] post-timeout activity - time-elapsed: 9.005496ms, GET "/api/v1/namespaces/kube-public" result: <nil>
	W0816 22:44:26.819798       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:44:28.821793       1 clientconn.go:1326] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0816 22:44:29.392274       1 trace.go:205] Trace[1089933883]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:500,continue: (16-Aug-2021 22:43:29.392) (total time: 60000ms):
	Trace[1089933883]: [1m0.00003453s] [1m0.00003453s] END
	E0816 22:44:29.392476       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0816 22:44:29.392717       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:44:29.394274       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:44:29.395722       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0816 22:44:29.396988       1 trace.go:205] Trace[1334651666]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.22.0 (linux/amd64) kubernetes/f27a086,audit-id:104ada67-5de3-499b-a955-3a308d8085b5,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (16-Aug-2021 22:43:29.392) (total time: 60004ms):
	Trace[1334651666]: [1m0.004803586s] [1m0.004803586s] END
	E0816 22:44:29.398965       1 timeout.go:135] post-timeout activity - time-elapsed: 5.861862ms, GET "/api/v1/nodes" result: <nil>
	
	* 
	* ==> kube-controller-manager [a5a4025cb261585737124bf5b619554e77896d6c93458ee1baf7d4a451c195db] <==
	* I0816 22:42:25.582064       1 event.go:291] "Event occurred" object="kube-system/metrics-server-7c784ccb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-7c784ccb57-dfjww"
	I0816 22:42:25.759031       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0816 22:42:26.271817       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0816 22:42:26.304758       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:42:26.351361       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:26.357469       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	I0816 22:42:26.382694       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:42:26.406104       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:42:26.413110       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:26.414233       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:42:26.453908       1 replica_set.go:536] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:26.454671       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:42:26.465800       1 replica_set.go:536] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:26.466791       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:42:26.566058       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-4vvg9"
	I0816 22:42:26.609058       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-d2v4k"
	E0816 22:42:51.034345       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:42:51.526952       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:43:21.081258       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:43:21.593015       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:43:49.788967       1 node_lifecycle_controller.go:1107] Error updating node no-preload-20210816223156-6986: Timeout: request did not complete within requested timeout - context deadline exceeded
	E0816 22:43:51.121656       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:43:51.645950       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:44:21.163228       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:44:21.705439       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [4cbc69479c98f3fb3b11b46c85389612ca6d3a2ee34875a8147fc2905800c7a5] <==
	* I0816 22:42:23.478842       1 node.go:172] Successfully retrieved node IP: 192.168.116.66
	I0816 22:42:23.479012       1 server_others.go:140] Detected node IP 192.168.116.66
	W0816 22:42:23.479041       1 server_others.go:565] Unknown proxy mode "", assuming iptables proxy
	W0816 22:42:23.557097       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0816 22:42:23.567700       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0816 22:42:23.567732       1 server_others.go:212] Using iptables Proxier.
	I0816 22:42:23.592778       1 server.go:649] Version: v1.22.0-rc.0
	I0816 22:42:23.645408       1 config.go:315] Starting service config controller
	I0816 22:42:23.645532       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:42:23.899540       1 config.go:224] Starting endpoint slice config controller
	I0816 22:42:23.958495       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0816 22:42:24.029672       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	E0816 22:42:24.013778       1 event_broadcaster.go:253] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"no-preload-20210816223156-6986.169beab2be77fb1e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xc03ed853e67496f1, ext:395359689, loc:(*time.Location)(0x2d7f3c0)}}, Series:(*v1.EventSeries)(nil), ReportingController:"kube-proxy", ReportingInstance:"kube-proxy-no-preload-20210816223156-6986", Action:"StartKubeProxy", Reason:"Starting", Regarding:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"no-preload-20210816223156-6986", UID:"no-preload-20210816223156-6986", APIVersion:"", ResourceVersion:"", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"", Type:"Normal", DeprecatedSource:v1.EventSource{Component:"", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}': 'Event "no-preload-20210816223156-6986.169beab2be77fb1e" is invalid: involvedObject.namespace: Invalid value: "": does not match event.namespace' (will not retry!)
	I0816 22:42:24.046205       1 shared_informer.go:247] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [cb12185bdacc363d50b453691deee789994d7ad81788b6df833ae986a27ce50a] <==
	* E0816 22:42:04.578678       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:42:04.579519       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 22:42:04.579900       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:42:04.580217       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:42:04.581045       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:42:04.581266       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:42:04.581425       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:42:04.582271       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:42:05.389984       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:42:05.425427       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:42:05.503074       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:42:05.503126       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:42:05.612892       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:42:05.633869       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:42:05.891862       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:42:05.917039       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:42:05.931237       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:42:05.936662       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:42:05.954630       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0816 22:42:05.968275       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:42:05.970200       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:42:06.060908       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:42:07.811804       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0816 22:42:07.811916       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	I0816 22:42:08.039434       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:37:06 UTC, end at Mon 2021-08-16 22:44:29 UTC. --
	Aug 16 22:42:30 no-preload-20210816223156-6986 kubelet[4954]: W0816 22:42:30.003988    4954 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/bbe6012e-b47a-4f77-a534-7acc694f08ee/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Aug 16 22:42:30 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:30.004221    4954 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbe6012e-b47a-4f77-a534-7acc694f08ee-config-volume" (OuterVolumeSpecName: "config-volume") pod "bbe6012e-b47a-4f77-a534-7acc694f08ee" (UID: "bbe6012e-b47a-4f77-a534-7acc694f08ee"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Aug 16 22:42:30 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:30.018455    4954 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbe6012e-b47a-4f77-a534-7acc694f08ee-kube-api-access-jvblx" (OuterVolumeSpecName: "kube-api-access-jvblx") pod "bbe6012e-b47a-4f77-a534-7acc694f08ee" (UID: "bbe6012e-b47a-4f77-a534-7acc694f08ee"). InnerVolumeSpecName "kube-api-access-jvblx". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Aug 16 22:42:30 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:30.104268    4954 reconciler.go:319] "Volume detached for volume \"kube-api-access-jvblx\" (UniqueName: \"kubernetes.io/projected/bbe6012e-b47a-4f77-a534-7acc694f08ee-kube-api-access-jvblx\") on node \"no-preload-20210816223156-6986\" DevicePath \"\""
	Aug 16 22:42:30 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:30.104633    4954 reconciler.go:319] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bbe6012e-b47a-4f77-a534-7acc694f08ee-config-volume\") on node \"no-preload-20210816223156-6986\" DevicePath \"\""
	Aug 16 22:42:30 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:30.366497    4954 scope.go:110] "RemoveContainer" containerID="f8944f430090a618857b0168c95cd98f593e9aa65e598d7470d08e1c976f6ef8"
	Aug 16 22:42:30 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:30.463314    4954 scope.go:110] "RemoveContainer" containerID="f8944f430090a618857b0168c95cd98f593e9aa65e598d7470d08e1c976f6ef8"
	Aug 16 22:42:30 no-preload-20210816223156-6986 kubelet[4954]: E0816 22:42:30.464312    4954 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8944f430090a618857b0168c95cd98f593e9aa65e598d7470d08e1c976f6ef8\": not found" containerID="f8944f430090a618857b0168c95cd98f593e9aa65e598d7470d08e1c976f6ef8"
	Aug 16 22:42:30 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:30.464542    4954 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:f8944f430090a618857b0168c95cd98f593e9aa65e598d7470d08e1c976f6ef8} err="failed to get container status \"f8944f430090a618857b0168c95cd98f593e9aa65e598d7470d08e1c976f6ef8\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8944f430090a618857b0168c95cd98f593e9aa65e598d7470d08e1c976f6ef8\": not found"
	Aug 16 22:42:32 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:32.179007    4954 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=bbe6012e-b47a-4f77-a534-7acc694f08ee path="/var/lib/kubelet/pods/bbe6012e-b47a-4f77-a534-7acc694f08ee/volumes"
	Aug 16 22:42:34 no-preload-20210816223156-6986 kubelet[4954]: E0816 22:42:34.489789    4954 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/podedacd358-46da-4db4-a8db-098f6edefb76\": RecentStats: unable to find data in memory cache]"
	Aug 16 22:42:37 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:37.538425    4954 scope.go:110] "RemoveContainer" containerID="6e961ef114c53f4caadc1d127a23aeb672481fd3e699e195475ad5da9efadb60"
	Aug 16 22:42:38 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:38.547988    4954 scope.go:110] "RemoveContainer" containerID="6e961ef114c53f4caadc1d127a23aeb672481fd3e699e195475ad5da9efadb60"
	Aug 16 22:42:38 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:38.548504    4954 scope.go:110] "RemoveContainer" containerID="f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2"
	Aug 16 22:42:38 no-preload-20210816223156-6986 kubelet[4954]: E0816 22:42:38.561005    4954 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-d2v4k_kubernetes-dashboard(d2a31ab1-304a-4179-9e46-8625b64d8dc4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-d2v4k" podUID=d2a31ab1-304a-4179-9e46-8625b64d8dc4
	Aug 16 22:42:39 no-preload-20210816223156-6986 kubelet[4954]: E0816 22:42:39.095656    4954 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:42:39 no-preload-20210816223156-6986 kubelet[4954]: E0816 22:42:39.095775    4954 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:42:39 no-preload-20210816223156-6986 kubelet[4954]: E0816 22:42:39.095933    4954 kuberuntime_manager.go:895] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-qhpdb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-dfjww_kube-system(7a744b20-6d7f-4001-a322-7e5615cbf15f): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:42:39 no-preload-20210816223156-6986 kubelet[4954]: E0816 22:42:39.095982    4954 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-dfjww" podUID=7a744b20-6d7f-4001-a322-7e5615cbf15f
	Aug 16 22:42:39 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:39.553372    4954 scope.go:110] "RemoveContainer" containerID="f408bf522922a941e4e86de3ae6bdfab688ea8e11b934464aae37835c9aaa2f2"
	Aug 16 22:42:39 no-preload-20210816223156-6986 kubelet[4954]: E0816 22:42:39.554186    4954 pod_workers.go:747] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-d2v4k_kubernetes-dashboard(d2a31ab1-304a-4179-9e46-8625b64d8dc4)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-d2v4k" podUID=d2a31ab1-304a-4179-9e46-8625b64d8dc4
	Aug 16 22:42:42 no-preload-20210816223156-6986 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:42:42 no-preload-20210816223156-6986 kubelet[4954]: I0816 22:42:42.979034    4954 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 16 22:42:42 no-preload-20210816223156-6986 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:42:42 no-preload-20210816223156-6986 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [64955f6935975a441574e3eb9c36fc2051db34d53e0a734f022faaa0b3b02476] <==
	* 2021/08/16 22:42:30 Using namespace: kubernetes-dashboard
	2021/08/16 22:42:30 Using in-cluster config to connect to apiserver
	2021/08/16 22:42:30 Using secret token for csrf signing
	2021/08/16 22:42:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/16 22:42:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/16 22:42:30 Successful initial request to the apiserver, version: v1.22.0-rc.0
	2021/08/16 22:42:30 Generating JWE encryption key
	2021/08/16 22:42:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/16 22:42:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/16 22:42:31 Initializing JWE encryption key from synchronized object
	2021/08/16 22:42:31 Creating in-cluster Sidecar client
	2021/08/16 22:42:31 Serving insecurely on HTTP port: 9090
	2021/08/16 22:42:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:42:30 Starting overwatch
	
	* 
	* ==> storage-provisioner [89a458187aec6f7ac3766330b7b90b9fdf6aa67b998173a16665e7bc97631de2] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 89 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc0003e6ad0, 0x0)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc0003e6ac0)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc000416420, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc000372c80, 0x18e5530, 0xc0003e6c80, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0001a87c0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0001a87c0, 0x18b3d60, 0xc0001c0210, 0x1, 0xc000090e40)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0001a87c0, 0x3b9aca00, 0x0, 0x1, 0xc000090e40)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0001a87c0, 0x3b9aca00, 0xc000090e40)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

-- /stdout --
** stderr ** 
	E0816 22:44:29.400104   19833 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: "\n** stderr ** \nError from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/no-preload/serial/Pause (107.40s)
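
Note: in the kube-apiserver log captured above, every 60s "Get"/"List" trace is accompanied by etcd dial failures (dial tcp 127.0.0.1:2379: i/o timeout), which is consistent with the kubectl timeout that aborted log collection. A minimal sketch of how one might probe etcd for this profile from the Jenkins workspace, assuming the VM is still running and that the certificates sit at the default minikube location /var/lib/minikube/certs/etcd (both are assumptions about this build, not something the harness verified):

    # confirm the etcd container is still up inside the guest
    out/minikube-linux-amd64 ssh -p no-preload-20210816223156-6986 -- sudo crictl ps --name etcd

    # query etcd's /health endpoint directly; certificate paths are assumed defaults
    out/minikube-linux-amd64 ssh -p no-preload-20210816223156-6986 -- sudo curl -s \
        --cacert /var/lib/minikube/certs/etcd/ca.crt \
        --cert /var/lib/minikube/certs/etcd/healthcheck-client.crt \
        --key /var/lib/minikube/certs/etcd/healthcheck-client.key \
        https://127.0.0.1:2379/health

A healthy member typically answers {"health":"true"}; anything else, or a hang, would corroborate the apiserver-side i/o timeouts above.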

TestStartStop/group/old-k8s-version/serial/Pause (167.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20210816223154-6986 --alsologtostderr -v=1
E0816 22:43:48.546751    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816222225-6986/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-20210816223154-6986 --alsologtostderr -v=1: exit status 80 (2.577659435s)

-- stdout --
	* Pausing node old-k8s-version-20210816223154-6986 ... 
	
	

-- /stdout --
** stderr ** 
	I0816 22:43:48.047892   20101 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:43:48.047993   20101 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:43:48.048004   20101 out.go:311] Setting ErrFile to fd 2...
	I0816 22:43:48.048009   20101 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:43:48.048167   20101 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:43:48.048411   20101 out.go:305] Setting JSON to false
	I0816 22:43:48.048443   20101 mustload.go:65] Loading cluster: old-k8s-version-20210816223154-6986
	I0816 22:43:48.048913   20101 config.go:177] Loaded profile config "old-k8s-version-20210816223154-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0816 22:43:48.049992   20101 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:48.050050   20101 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:48.062370   20101 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40813
	I0816 22:43:48.062884   20101 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:48.063602   20101 main.go:130] libmachine: Using API Version  1
	I0816 22:43:48.063628   20101 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:48.064072   20101 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:48.064277   20101 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:43:48.067936   20101 host.go:66] Checking if "old-k8s-version-20210816223154-6986" exists ...
	I0816 22:43:48.068310   20101 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:48.068364   20101 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:48.081314   20101 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46579
	I0816 22:43:48.081777   20101 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:48.082351   20101 main.go:130] libmachine: Using API Version  1
	I0816 22:43:48.082379   20101 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:48.082793   20101 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:48.082966   20101 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .DriverName
	I0816 22:43:48.083721   20101 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-20210816223154-6986 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0816 22:43:48.087012   20101 out.go:177] * Pausing node old-k8s-version-20210816223154-6986 ... 
	I0816 22:43:48.087042   20101 host.go:66] Checking if "old-k8s-version-20210816223154-6986" exists ...
	I0816 22:43:48.087481   20101 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:48.087536   20101 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:48.099567   20101 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45489
	I0816 22:43:48.100011   20101 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:48.100577   20101 main.go:130] libmachine: Using API Version  1
	I0816 22:43:48.100600   20101 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:48.100969   20101 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:48.101147   20101 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .DriverName
	I0816 22:43:48.101381   20101 ssh_runner.go:149] Run: systemctl --version
	I0816 22:43:48.101409   20101 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHHostname
	I0816 22:43:48.107909   20101 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:43:48.108373   20101 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:23:71", ip: ""} in network mk-old-k8s-version-20210816223154-6986: {Iface:virbr6 ExpiryTime:2021-08-16 23:36:29 +0000 UTC Type:0 Mac:52:54:00:bf:23:71 Iaid: IPaddr:192.168.94.246 Prefix:24 Hostname:old-k8s-version-20210816223154-6986 Clientid:01:52:54:00:bf:23:71}
	I0816 22:43:48.108399   20101 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined IP address 192.168.94.246 and MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:43:48.108552   20101 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHPort
	I0816 22:43:48.108723   20101 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHKeyPath
	I0816 22:43:48.108877   20101 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHUsername
	I0816 22:43:48.109014   20101 sshutil.go:53] new ssh client: &{IP:192.168.94.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa Username:docker}
	I0816 22:43:48.219665   20101 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:43:48.234179   20101 pause.go:50] kubelet running: true
	I0816 22:43:48.234257   20101 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:43:48.496365   20101 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:43:48.496475   20101 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:43:48.650460   20101 cri.go:76] found id: "7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b"
	I0816 22:43:48.650491   20101 cri.go:76] found id: "6ea2a5a98c7784a7ad7702acc262ab5efb1fb3b94ee224698a6ebf4024bc79d9"
	I0816 22:43:48.650499   20101 cri.go:76] found id: "c56f64c3fe77c258bef4bbf05b666c6e0f0b07c60a576cf8f489af832a850b70"
	I0816 22:43:48.650505   20101 cri.go:76] found id: "5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36"
	I0816 22:43:48.650510   20101 cri.go:76] found id: "1a95abd0c58fec2e599b7fce2863703efdb6bdba4e9f4ead191033cb41d2b50d"
	I0816 22:43:48.650515   20101 cri.go:76] found id: "1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d"
	I0816 22:43:48.650521   20101 cri.go:76] found id: "7237baa217ae7218864322d6893eb52cef6f4f80ba58f4e4a0683b2bb5861a85"
	I0816 22:43:48.650526   20101 cri.go:76] found id: "9672b955ced7e9fac049fbb30c5f91f4d0f06cda7931c585f9174c9e9693835e"
	I0816 22:43:48.650532   20101 cri.go:76] found id: "e30d5daacc87f690bc273d129f168487c51648dd2e6bc8d0870a0848215e02e9"
	I0816 22:43:48.650548   20101 cri.go:76] found id: ""
	I0816 22:43:48.650611   20101 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0816 22:43:48.708656   20101 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"12379a58cddde3b039f7bc0fceb518fa6e3feb0fb30d42e7fd083c4613b93da2","pid":7726,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12379a58cddde3b039f7bc0fceb518fa6e3feb0fb30d42e7fd083c4613b93da2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12379a58cddde3b039f7bc0fceb518fa6e3feb0fb30d42e7fd083c4613b93da2/rootfs","created":"2021-08-16T22:42:55.53703049Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"12379a58cddde3b039f7bc0fceb518fa6e3feb0fb30d42e7fd083c4613b93da2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-5d8978d65d-mblkl_4bdd50b5-fee3-11eb-bea8-525400bf2371"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1a95abd0c58fec2e599b7fce2863703efdb6bdba4e9f4ead191033cb41d2b50d","pid":6545,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a95abd0c58fec2e599b7fce2863703efdb6bdba4e9f4ead191033cb41d2b50d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a95abd0c58fec2e599b7fce2863703efdb6bdba4e9f4ead191033cb41d2b50d/rootfs","created":"2021-08-16T22:42:23.486492937Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d44da39af525283e271fe5e2c1d6b6ca8c6c5c4fbe472dc8018a26677fb22cfe"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d","pid":6531,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d/rootfs","created":"2021-08-16T22:42:23.482402604Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d8a20faccc88683a91a38b04fce625884ff0902e3655da6b1799f8bd0bc2805d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2a86821519acf60a4fa28241931865c536eb6fd06d8b333b1c0cdb3d9e4b2472","pid":7340,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a86821519acf60a4fa28241931865c536eb6fd06d8b333b1c0cdb3d9e4b2472","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a86821519acf60a4fa28241931865c536eb6fd06d8b333b1c0cdb3d9e4b2472/rootfs","created":"2021-08-16T22:42:53.543529247Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2a86821519acf60a4fa28241931865c536eb6fd06d8b333b1c0cdb3d9e4b2472","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_4a660c8a-fee3-11eb-bea8-525400bf2371"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2f7b780c4307b02ef87b4146fa91f3e11360d6db0cbc012d1cc73f4b3bf9f70c","pid":7546,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f7b780c4307b02ef87b4146fa91f3e11360d6db0cbc012d1cc73f4b3bf9f70c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f7b780c4307b02ef87b4146fa91f3e11360d6db0cbc012d1cc73f4b3bf9f70c/rootfs","created":"2021-08-16T22:42:54.624092758Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2f7b780c4307b02ef87b4146fa91f3e11360d6db0cbc012d1cc73f4b3bf9f70c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-5b494cc544-nznrc_4b4f43e1-fee3-11eb-bea8-525400bf2371"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a1b01ca39e0d7fb0b2bde627216df6d80705cf380bb968986bbab4dc7c9c402","pid":6925,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a1b01ca39e0d7fb0b2bde627216df6d80705cf380bb968986bbab4dc7c9c402","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a1b01ca39e0d7fb0b2bde627216df6d80705cf380bb968986bbab4dc7c9c402/rootfs","created":"2021-08-16T22:42:50.633871657Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3a1b01ca39e0d7fb0b2bde627216df6d80705cf380bb968986bbab4dc7c9c402","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-jmg6d_4905cd4b-fee3-11eb-bea8-525400bf2371"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a4ddc9391f6bbf393f2f69a914b5280c8cf2430956fe3b1d73869edb2579212","pid":6318,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a4ddc9391f6bbf393f2f69a914b5280c8cf2430956fe3b1d73869edb2579212","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a4ddc9391f6bbf393f2f69a914b5280c8cf2430956fe3b1d73869edb2579212/rootfs","created":"2021-08-16T22:42:22.694886933Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3a4ddc9391f6bbf393f2f69a914b5280c8cf2430956fe3b1d73869edb2579212","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-old-k8s-version-20210816223154-6986_793a4188543ad631a78be72704d73ea2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"40db6b8d20e4bd736d3b67a9f90c1c18618ce7e7920a19a8dbc53971d69eacb5","pid":7635,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/40db6b8d20e4bd736d3b67a9f90c1c18618ce7e7920a19a8dbc53971d69eacb5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/40db6b8d20e4bd736d3b67a9f90c1c18618ce7e7920a19a8dbc53971d69eacb5/rootfs","created":"2021-08-16T22:42:54.780038182Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"40db6b8d20e4bd736d3b67a9f90c1c18618ce7e7920a19a8dbc53971d69eacb5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-8546d8b77b-vpvp5_4b7525e3-fee3-11eb-bea8-525400bf2371"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"59dc9b6f995b1a58ca9462c6b8f16090e1eed37fd67efe185a9750e039d2012c","pid":6400,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/59dc9b6f995b1a58ca9462c6b8f16090e1eed37fd67efe185a9750e039d2012c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/59dc9b6f995b1a58ca9462c6b8f16090e1eed37fd67efe185a9750e039d2012c/rootfs","created":"2021-08-16T22:42:22.981344745Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"59dc9b6f995b1a58ca9462c6b8f16090e1eed37fd67efe185a9750e039d2012c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-old-k8s-version-20210816223154-6986_999695c487c3d26dbecc6adddcd12120"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36","pid":6579,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36/rootfs","created":"2021-08-16T22:42:24.802023848Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"59dc9b6f995b1a58ca9462c6b8f16090e1eed37fd67efe185a9750e039d2012c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6ea2a5a98c7784a7ad7702acc262ab5efb1fb3b94ee224698a6ebf4024bc79d9","pid":7319,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ea2a5a98c7784a7ad7702acc262ab5efb1fb3b94ee224698a6ebf4024bc79d9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ea2a5a98c7784a7ad7702acc262ab5efb1fb3b94ee224698a6ebf4024bc79d9/rootfs","created":"2021-08-16T22:42:53.206035797Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e40105c5a7146568f75a2427e1b2e99fb1b9de02d027483cf5917e9baf277fa8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7237baa217ae7218864322d6893eb52cef6f4f80ba58f4e4a0683b2bb5861a85","pid":6453,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7237baa217ae7218864322d6893eb52cef6f4f80ba58f4e4a0683b2bb5861a85","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7237baa217ae7218864322d6893eb52cef6f4f80ba58f4e4a0683b2bb5861a85/rootfs","created":"2021-08-16T22:42:23.092003412Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3a4ddc9391f6bbf393f2f69a914b5280c8cf2430956fe3b1d73869edb2579212"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b","pid":7404,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b/rootfs","created":"2021-08-16T22:42:53.953742775Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"2a86821519acf60a4fa28241931865c536eb6fd06d8b333b1c0cdb3d9e4b2472"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c56f64c3fe77c258bef4bbf05b666c6e0f0b07c60a576cf8f489af832a850b70","pid":6975,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c56f64c3fe77c258bef4bbf05b666c6e0f0b07c60a576cf8f489af832a850b70","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c56f64c3fe77c258bef4bbf05b666c6e0f0b07c60a576cf8f489af832a850b70/rootfs","created":"2021-08-16T22:42:50.918927055Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3a1b01ca39e0d7fb0b2bde627216df6d80705cf380bb968986bbab4dc7c9c402"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d44da39af525283e271fe5e2c1d6b6ca8c6c5c4fbe472dc8018a26677fb22cfe","pid":6388,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d44da39af525283e271fe5e2c1d6b6ca8c6c5c4fbe472dc8018a26677fb22cfe","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d44da39af525283e271fe5e2c1d6b6ca8c6c5c4fbe472dc8018a26677fb22cfe/rootfs","created":"2021-08-16T22:42:22.870472464Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d44da39af525283e271fe5e2c1d6b6ca8c6c5c4fbe472dc8018a26677fb22cfe","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-20210816223154-6986_b42c3e5fa8e81a5a78a3a372f8953126"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d8a20faccc88683a91a38b04fce625884ff0902e3655da6b1799f8bd0bc2805d","pid":6373,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8a20faccc88683a91a38b04fce625884ff0902e3655da6b1799f8bd0bc2805d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8a20faccc88683a91a38b04fce625884ff0902e3655da6b1799f8bd0bc2805d/rootfs","created":"2021-08-16T22:42:22.878109147Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d8a20faccc88683a91a38b04fce625884ff0902e3655da6b1799f8bd0bc2805d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-old-k8s-version-20210816223154-6986_ba371a1cc55ef6aa89a1ba4554611582"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e30d5daacc87f690bc273d129f168487c51648dd2e6bc8d0870a0848215e02e9","pid":7782,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e30d5daacc87f690bc273d129f168487c51648dd2e6bc8d0870a0848215e02e9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e30d5daacc87f690bc273d129f168487c51648dd2e6bc8d0870a0848215e02e9/rootfs","created":"2021-08-16T22:42:56.116873996Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"12379a58cddde3b039f7bc0fceb518fa6e3feb0fb30d42e7fd083c4613b93da2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e40105c5a7146568f75a2427e1b2e99fb1b9de02d027483cf5917e9baf277fa8","pid":7257,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e40105c5a7146568f75a2427e1b2e99fb1b9de02d027483cf5917e9baf277fa8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e40105c5a7146568f75a2427e1b2e99fb1b9de02d027483cf5917e9baf277fa8/rootfs","created":"2021-08-16T22:42:52.759803807Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e40105c5a7146568f75a2427e1b2e99fb1b9de02d027483cf5917e9baf277fa8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-fb8b8dccf-r87qj_48ebe67d-fee3-11eb-bea8-525400bf2371"},"owner":"root"}]
	I0816 22:43:48.708933   20101 cri.go:113] list returned 18 containers
	I0816 22:43:48.708954   20101 cri.go:116] container: {ID:12379a58cddde3b039f7bc0fceb518fa6e3feb0fb30d42e7fd083c4613b93da2 Status:running}
	I0816 22:43:48.708983   20101 cri.go:118] skipping 12379a58cddde3b039f7bc0fceb518fa6e3feb0fb30d42e7fd083c4613b93da2 - not in ps
	I0816 22:43:48.708987   20101 cri.go:116] container: {ID:1a95abd0c58fec2e599b7fce2863703efdb6bdba4e9f4ead191033cb41d2b50d Status:running}
	I0816 22:43:48.708992   20101 cri.go:116] container: {ID:1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d Status:running}
	I0816 22:43:48.708997   20101 cri.go:116] container: {ID:2a86821519acf60a4fa28241931865c536eb6fd06d8b333b1c0cdb3d9e4b2472 Status:running}
	I0816 22:43:48.709008   20101 cri.go:118] skipping 2a86821519acf60a4fa28241931865c536eb6fd06d8b333b1c0cdb3d9e4b2472 - not in ps
	I0816 22:43:48.709011   20101 cri.go:116] container: {ID:2f7b780c4307b02ef87b4146fa91f3e11360d6db0cbc012d1cc73f4b3bf9f70c Status:running}
	I0816 22:43:48.709024   20101 cri.go:118] skipping 2f7b780c4307b02ef87b4146fa91f3e11360d6db0cbc012d1cc73f4b3bf9f70c - not in ps
	I0816 22:43:48.709029   20101 cri.go:116] container: {ID:3a1b01ca39e0d7fb0b2bde627216df6d80705cf380bb968986bbab4dc7c9c402 Status:running}
	I0816 22:43:48.709035   20101 cri.go:118] skipping 3a1b01ca39e0d7fb0b2bde627216df6d80705cf380bb968986bbab4dc7c9c402 - not in ps
	I0816 22:43:48.709041   20101 cri.go:116] container: {ID:3a4ddc9391f6bbf393f2f69a914b5280c8cf2430956fe3b1d73869edb2579212 Status:running}
	I0816 22:43:48.709048   20101 cri.go:118] skipping 3a4ddc9391f6bbf393f2f69a914b5280c8cf2430956fe3b1d73869edb2579212 - not in ps
	I0816 22:43:48.709053   20101 cri.go:116] container: {ID:40db6b8d20e4bd736d3b67a9f90c1c18618ce7e7920a19a8dbc53971d69eacb5 Status:running}
	I0816 22:43:48.709061   20101 cri.go:118] skipping 40db6b8d20e4bd736d3b67a9f90c1c18618ce7e7920a19a8dbc53971d69eacb5 - not in ps
	I0816 22:43:48.709068   20101 cri.go:116] container: {ID:59dc9b6f995b1a58ca9462c6b8f16090e1eed37fd67efe185a9750e039d2012c Status:running}
	I0816 22:43:48.709076   20101 cri.go:118] skipping 59dc9b6f995b1a58ca9462c6b8f16090e1eed37fd67efe185a9750e039d2012c - not in ps
	I0816 22:43:48.709084   20101 cri.go:116] container: {ID:5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36 Status:running}
	I0816 22:43:48.709093   20101 cri.go:116] container: {ID:6ea2a5a98c7784a7ad7702acc262ab5efb1fb3b94ee224698a6ebf4024bc79d9 Status:running}
	I0816 22:43:48.709102   20101 cri.go:116] container: {ID:7237baa217ae7218864322d6893eb52cef6f4f80ba58f4e4a0683b2bb5861a85 Status:running}
	I0816 22:43:48.709116   20101 cri.go:116] container: {ID:7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b Status:running}
	I0816 22:43:48.709125   20101 cri.go:116] container: {ID:c56f64c3fe77c258bef4bbf05b666c6e0f0b07c60a576cf8f489af832a850b70 Status:running}
	I0816 22:43:48.709132   20101 cri.go:116] container: {ID:d44da39af525283e271fe5e2c1d6b6ca8c6c5c4fbe472dc8018a26677fb22cfe Status:running}
	I0816 22:43:48.709140   20101 cri.go:118] skipping d44da39af525283e271fe5e2c1d6b6ca8c6c5c4fbe472dc8018a26677fb22cfe - not in ps
	I0816 22:43:48.709149   20101 cri.go:116] container: {ID:d8a20faccc88683a91a38b04fce625884ff0902e3655da6b1799f8bd0bc2805d Status:running}
	I0816 22:43:48.709155   20101 cri.go:118] skipping d8a20faccc88683a91a38b04fce625884ff0902e3655da6b1799f8bd0bc2805d - not in ps
	I0816 22:43:48.709163   20101 cri.go:116] container: {ID:e30d5daacc87f690bc273d129f168487c51648dd2e6bc8d0870a0848215e02e9 Status:running}
	I0816 22:43:48.709167   20101 cri.go:116] container: {ID:e40105c5a7146568f75a2427e1b2e99fb1b9de02d027483cf5917e9baf277fa8 Status:running}
	I0816 22:43:48.709171   20101 cri.go:118] skipping e40105c5a7146568f75a2427e1b2e99fb1b9de02d027483cf5917e9baf277fa8 - not in ps
	I0816 22:43:48.709214   20101 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 1a95abd0c58fec2e599b7fce2863703efdb6bdba4e9f4ead191033cb41d2b50d
	I0816 22:43:48.735448   20101 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 1a95abd0c58fec2e599b7fce2863703efdb6bdba4e9f4ead191033cb41d2b50d 1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d
	I0816 22:43:48.767241   20101 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 1a95abd0c58fec2e599b7fce2863703efdb6bdba4e9f4ead191033cb41d2b50d 1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:43:48Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0816 22:43:49.043746   20101 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:43:49.060425   20101 pause.go:50] kubelet running: false
	I0816 22:43:49.060493   20101 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:43:49.287146   20101 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:43:49.287249   20101 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:43:49.454767   20101 cri.go:76] found id: "7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b"
	I0816 22:43:49.454803   20101 cri.go:76] found id: "6ea2a5a98c7784a7ad7702acc262ab5efb1fb3b94ee224698a6ebf4024bc79d9"
	I0816 22:43:49.454810   20101 cri.go:76] found id: "c56f64c3fe77c258bef4bbf05b666c6e0f0b07c60a576cf8f489af832a850b70"
	I0816 22:43:49.454816   20101 cri.go:76] found id: "5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36"
	I0816 22:43:49.454821   20101 cri.go:76] found id: "1a95abd0c58fec2e599b7fce2863703efdb6bdba4e9f4ead191033cb41d2b50d"
	I0816 22:43:49.454827   20101 cri.go:76] found id: "1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d"
	I0816 22:43:49.454832   20101 cri.go:76] found id: "7237baa217ae7218864322d6893eb52cef6f4f80ba58f4e4a0683b2bb5861a85"
	I0816 22:43:49.454837   20101 cri.go:76] found id: "9672b955ced7e9fac049fbb30c5f91f4d0f06cda7931c585f9174c9e9693835e"
	I0816 22:43:49.454856   20101 cri.go:76] found id: "e30d5daacc87f690bc273d129f168487c51648dd2e6bc8d0870a0848215e02e9"
	I0816 22:43:49.454870   20101 cri.go:76] found id: ""
	I0816 22:43:49.454921   20101 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0816 22:43:49.508365   20101 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"12379a58cddde3b039f7bc0fceb518fa6e3feb0fb30d42e7fd083c4613b93da2","pid":7726,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12379a58cddde3b039f7bc0fceb518fa6e3feb0fb30d42e7fd083c4613b93da2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12379a58cddde3b039f7bc0fceb518fa6e3feb0fb30d42e7fd083c4613b93da2/rootfs","created":"2021-08-16T22:42:55.53703049Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"12379a58cddde3b039f7bc0fceb518fa6e3feb0fb30d42e7fd083c4613b93da2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-5d8978d65d-mblkl_4bdd50b5-fee3-11eb-bea8-525400bf2371"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1a95abd0c58fec2e599b7fce2863703efdb6bdba4e9f4ead191033cb41d2b50d","pid":6545,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a95abd0c58
fec2e599b7fce2863703efdb6bdba4e9f4ead191033cb41d2b50d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a95abd0c58fec2e599b7fce2863703efdb6bdba4e9f4ead191033cb41d2b50d/rootfs","created":"2021-08-16T22:42:23.486492937Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d44da39af525283e271fe5e2c1d6b6ca8c6c5c4fbe472dc8018a26677fb22cfe"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d","pid":6531,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d/rootfs","created":"2021-08-16T22:42:23.482402604Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","i
o.kubernetes.cri.sandbox-id":"d8a20faccc88683a91a38b04fce625884ff0902e3655da6b1799f8bd0bc2805d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2a86821519acf60a4fa28241931865c536eb6fd06d8b333b1c0cdb3d9e4b2472","pid":7340,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a86821519acf60a4fa28241931865c536eb6fd06d8b333b1c0cdb3d9e4b2472","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a86821519acf60a4fa28241931865c536eb6fd06d8b333b1c0cdb3d9e4b2472/rootfs","created":"2021-08-16T22:42:53.543529247Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2a86821519acf60a4fa28241931865c536eb6fd06d8b333b1c0cdb3d9e4b2472","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_4a660c8a-fee3-11eb-bea8-525400bf2371"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2f7b780c4307b02ef87b4146fa91f3e11360d6db0cbc012d1cc73f4b3bf9f70c","pid":7546,"status":"running","bundle":"/run/containerd/io.containerd.runti
me.v2.task/k8s.io/2f7b780c4307b02ef87b4146fa91f3e11360d6db0cbc012d1cc73f4b3bf9f70c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f7b780c4307b02ef87b4146fa91f3e11360d6db0cbc012d1cc73f4b3bf9f70c/rootfs","created":"2021-08-16T22:42:54.624092758Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2f7b780c4307b02ef87b4146fa91f3e11360d6db0cbc012d1cc73f4b3bf9f70c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-5b494cc544-nznrc_4b4f43e1-fee3-11eb-bea8-525400bf2371"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a1b01ca39e0d7fb0b2bde627216df6d80705cf380bb968986bbab4dc7c9c402","pid":6925,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a1b01ca39e0d7fb0b2bde627216df6d80705cf380bb968986bbab4dc7c9c402","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a1b01ca39e0d7fb0b2bde627216df6d80705cf380bb968986bbab4dc7c9c402/rootfs","created":"2021-08-16T22:42:50.633
871657Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3a1b01ca39e0d7fb0b2bde627216df6d80705cf380bb968986bbab4dc7c9c402","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-jmg6d_4905cd4b-fee3-11eb-bea8-525400bf2371"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a4ddc9391f6bbf393f2f69a914b5280c8cf2430956fe3b1d73869edb2579212","pid":6318,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a4ddc9391f6bbf393f2f69a914b5280c8cf2430956fe3b1d73869edb2579212","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a4ddc9391f6bbf393f2f69a914b5280c8cf2430956fe3b1d73869edb2579212/rootfs","created":"2021-08-16T22:42:22.694886933Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3a4ddc9391f6bbf393f2f69a914b5280c8cf2430956fe3b1d73869edb2579212","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-old-k8s-version-20210816223154-6986_79
3a4188543ad631a78be72704d73ea2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"40db6b8d20e4bd736d3b67a9f90c1c18618ce7e7920a19a8dbc53971d69eacb5","pid":7635,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/40db6b8d20e4bd736d3b67a9f90c1c18618ce7e7920a19a8dbc53971d69eacb5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/40db6b8d20e4bd736d3b67a9f90c1c18618ce7e7920a19a8dbc53971d69eacb5/rootfs","created":"2021-08-16T22:42:54.780038182Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"40db6b8d20e4bd736d3b67a9f90c1c18618ce7e7920a19a8dbc53971d69eacb5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-8546d8b77b-vpvp5_4b7525e3-fee3-11eb-bea8-525400bf2371"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"59dc9b6f995b1a58ca9462c6b8f16090e1eed37fd67efe185a9750e039d2012c","pid":6400,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/59dc9b6f995b1a58ca9462c6b8f16090e1
eed37fd67efe185a9750e039d2012c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/59dc9b6f995b1a58ca9462c6b8f16090e1eed37fd67efe185a9750e039d2012c/rootfs","created":"2021-08-16T22:42:22.981344745Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"59dc9b6f995b1a58ca9462c6b8f16090e1eed37fd67efe185a9750e039d2012c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-old-k8s-version-20210816223154-6986_999695c487c3d26dbecc6adddcd12120"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36","pid":6579,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36/rootfs","created":"2021-08-16T22:42:24.802023848Z","annotations":{"io.kubernetes.cri.container-name":"etcd","
io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"59dc9b6f995b1a58ca9462c6b8f16090e1eed37fd67efe185a9750e039d2012c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6ea2a5a98c7784a7ad7702acc262ab5efb1fb3b94ee224698a6ebf4024bc79d9","pid":7319,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ea2a5a98c7784a7ad7702acc262ab5efb1fb3b94ee224698a6ebf4024bc79d9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ea2a5a98c7784a7ad7702acc262ab5efb1fb3b94ee224698a6ebf4024bc79d9/rootfs","created":"2021-08-16T22:42:53.206035797Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e40105c5a7146568f75a2427e1b2e99fb1b9de02d027483cf5917e9baf277fa8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7237baa217ae7218864322d6893eb52cef6f4f80ba58f4e4a0683b2bb5861a85","pid":6453,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7237baa217ae72
18864322d6893eb52cef6f4f80ba58f4e4a0683b2bb5861a85","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7237baa217ae7218864322d6893eb52cef6f4f80ba58f4e4a0683b2bb5861a85/rootfs","created":"2021-08-16T22:42:23.092003412Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3a4ddc9391f6bbf393f2f69a914b5280c8cf2430956fe3b1d73869edb2579212"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b","pid":7404,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b/rootfs","created":"2021-08-16T22:42:53.953742775Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kuber
netes.cri.sandbox-id":"2a86821519acf60a4fa28241931865c536eb6fd06d8b333b1c0cdb3d9e4b2472"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c56f64c3fe77c258bef4bbf05b666c6e0f0b07c60a576cf8f489af832a850b70","pid":6975,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c56f64c3fe77c258bef4bbf05b666c6e0f0b07c60a576cf8f489af832a850b70","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c56f64c3fe77c258bef4bbf05b666c6e0f0b07c60a576cf8f489af832a850b70/rootfs","created":"2021-08-16T22:42:50.918927055Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3a1b01ca39e0d7fb0b2bde627216df6d80705cf380bb968986bbab4dc7c9c402"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d44da39af525283e271fe5e2c1d6b6ca8c6c5c4fbe472dc8018a26677fb22cfe","pid":6388,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d44da39af525283e271fe5e2c1d6b6ca8c6c5c4fbe472dc8018a26677fb22cfe",
"rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d44da39af525283e271fe5e2c1d6b6ca8c6c5c4fbe472dc8018a26677fb22cfe/rootfs","created":"2021-08-16T22:42:22.870472464Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d44da39af525283e271fe5e2c1d6b6ca8c6c5c4fbe472dc8018a26677fb22cfe","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-20210816223154-6986_b42c3e5fa8e81a5a78a3a372f8953126"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d8a20faccc88683a91a38b04fce625884ff0902e3655da6b1799f8bd0bc2805d","pid":6373,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8a20faccc88683a91a38b04fce625884ff0902e3655da6b1799f8bd0bc2805d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8a20faccc88683a91a38b04fce625884ff0902e3655da6b1799f8bd0bc2805d/rootfs","created":"2021-08-16T22:42:22.878109147Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kuberne
tes.cri.sandbox-id":"d8a20faccc88683a91a38b04fce625884ff0902e3655da6b1799f8bd0bc2805d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-old-k8s-version-20210816223154-6986_ba371a1cc55ef6aa89a1ba4554611582"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e30d5daacc87f690bc273d129f168487c51648dd2e6bc8d0870a0848215e02e9","pid":7782,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e30d5daacc87f690bc273d129f168487c51648dd2e6bc8d0870a0848215e02e9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e30d5daacc87f690bc273d129f168487c51648dd2e6bc8d0870a0848215e02e9/rootfs","created":"2021-08-16T22:42:56.116873996Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"12379a58cddde3b039f7bc0fceb518fa6e3feb0fb30d42e7fd083c4613b93da2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e40105c5a7146568f75a2427e1b2e99fb1b9de02d027483cf5917e9baf277fa
8","pid":7257,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e40105c5a7146568f75a2427e1b2e99fb1b9de02d027483cf5917e9baf277fa8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e40105c5a7146568f75a2427e1b2e99fb1b9de02d027483cf5917e9baf277fa8/rootfs","created":"2021-08-16T22:42:52.759803807Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e40105c5a7146568f75a2427e1b2e99fb1b9de02d027483cf5917e9baf277fa8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-fb8b8dccf-r87qj_48ebe67d-fee3-11eb-bea8-525400bf2371"},"owner":"root"}]
	I0816 22:43:49.508698   20101 cri.go:113] list returned 18 containers
	I0816 22:43:49.508725   20101 cri.go:116] container: {ID:12379a58cddde3b039f7bc0fceb518fa6e3feb0fb30d42e7fd083c4613b93da2 Status:running}
	I0816 22:43:49.508739   20101 cri.go:118] skipping 12379a58cddde3b039f7bc0fceb518fa6e3feb0fb30d42e7fd083c4613b93da2 - not in ps
	I0816 22:43:49.508746   20101 cri.go:116] container: {ID:1a95abd0c58fec2e599b7fce2863703efdb6bdba4e9f4ead191033cb41d2b50d Status:paused}
	I0816 22:43:49.508754   20101 cri.go:122] skipping {1a95abd0c58fec2e599b7fce2863703efdb6bdba4e9f4ead191033cb41d2b50d paused}: state = "paused", want "running"
	I0816 22:43:49.508769   20101 cri.go:116] container: {ID:1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d Status:running}
	I0816 22:43:49.508777   20101 cri.go:116] container: {ID:2a86821519acf60a4fa28241931865c536eb6fd06d8b333b1c0cdb3d9e4b2472 Status:running}
	I0816 22:43:49.508790   20101 cri.go:118] skipping 2a86821519acf60a4fa28241931865c536eb6fd06d8b333b1c0cdb3d9e4b2472 - not in ps
	I0816 22:43:49.508796   20101 cri.go:116] container: {ID:2f7b780c4307b02ef87b4146fa91f3e11360d6db0cbc012d1cc73f4b3bf9f70c Status:running}
	I0816 22:43:49.508804   20101 cri.go:118] skipping 2f7b780c4307b02ef87b4146fa91f3e11360d6db0cbc012d1cc73f4b3bf9f70c - not in ps
	I0816 22:43:49.508810   20101 cri.go:116] container: {ID:3a1b01ca39e0d7fb0b2bde627216df6d80705cf380bb968986bbab4dc7c9c402 Status:running}
	I0816 22:43:49.508825   20101 cri.go:118] skipping 3a1b01ca39e0d7fb0b2bde627216df6d80705cf380bb968986bbab4dc7c9c402 - not in ps
	I0816 22:43:49.508834   20101 cri.go:116] container: {ID:3a4ddc9391f6bbf393f2f69a914b5280c8cf2430956fe3b1d73869edb2579212 Status:running}
	I0816 22:43:49.508841   20101 cri.go:118] skipping 3a4ddc9391f6bbf393f2f69a914b5280c8cf2430956fe3b1d73869edb2579212 - not in ps
	I0816 22:43:49.508853   20101 cri.go:116] container: {ID:40db6b8d20e4bd736d3b67a9f90c1c18618ce7e7920a19a8dbc53971d69eacb5 Status:running}
	I0816 22:43:49.508861   20101 cri.go:118] skipping 40db6b8d20e4bd736d3b67a9f90c1c18618ce7e7920a19a8dbc53971d69eacb5 - not in ps
	I0816 22:43:49.508866   20101 cri.go:116] container: {ID:59dc9b6f995b1a58ca9462c6b8f16090e1eed37fd67efe185a9750e039d2012c Status:running}
	I0816 22:43:49.508873   20101 cri.go:118] skipping 59dc9b6f995b1a58ca9462c6b8f16090e1eed37fd67efe185a9750e039d2012c - not in ps
	I0816 22:43:49.508884   20101 cri.go:116] container: {ID:5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36 Status:running}
	I0816 22:43:49.508891   20101 cri.go:116] container: {ID:6ea2a5a98c7784a7ad7702acc262ab5efb1fb3b94ee224698a6ebf4024bc79d9 Status:running}
	I0816 22:43:49.508896   20101 cri.go:116] container: {ID:7237baa217ae7218864322d6893eb52cef6f4f80ba58f4e4a0683b2bb5861a85 Status:running}
	I0816 22:43:49.508902   20101 cri.go:116] container: {ID:7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b Status:running}
	I0816 22:43:49.508909   20101 cri.go:116] container: {ID:c56f64c3fe77c258bef4bbf05b666c6e0f0b07c60a576cf8f489af832a850b70 Status:running}
	I0816 22:43:49.508922   20101 cri.go:116] container: {ID:d44da39af525283e271fe5e2c1d6b6ca8c6c5c4fbe472dc8018a26677fb22cfe Status:running}
	I0816 22:43:49.508932   20101 cri.go:118] skipping d44da39af525283e271fe5e2c1d6b6ca8c6c5c4fbe472dc8018a26677fb22cfe - not in ps
	I0816 22:43:49.508937   20101 cri.go:116] container: {ID:d8a20faccc88683a91a38b04fce625884ff0902e3655da6b1799f8bd0bc2805d Status:running}
	I0816 22:43:49.508943   20101 cri.go:118] skipping d8a20faccc88683a91a38b04fce625884ff0902e3655da6b1799f8bd0bc2805d - not in ps
	I0816 22:43:49.508948   20101 cri.go:116] container: {ID:e30d5daacc87f690bc273d129f168487c51648dd2e6bc8d0870a0848215e02e9 Status:running}
	I0816 22:43:49.508961   20101 cri.go:116] container: {ID:e40105c5a7146568f75a2427e1b2e99fb1b9de02d027483cf5917e9baf277fa8 Status:running}
	I0816 22:43:49.508970   20101 cri.go:118] skipping e40105c5a7146568f75a2427e1b2e99fb1b9de02d027483cf5917e9baf277fa8 - not in ps
	I0816 22:43:49.509027   20101 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d
	I0816 22:43:49.536855   20101 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d 5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36
	I0816 22:43:49.560780   20101 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d 5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:43:49Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0816 22:43:50.101494   20101 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:43:50.116341   20101 pause.go:50] kubelet running: false
	I0816 22:43:50.116449   20101 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:43:50.309134   20101 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:43:50.309236   20101 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:43:50.445266   20101 cri.go:76] found id: "7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b"
	I0816 22:43:50.445297   20101 cri.go:76] found id: "6ea2a5a98c7784a7ad7702acc262ab5efb1fb3b94ee224698a6ebf4024bc79d9"
	I0816 22:43:50.445305   20101 cri.go:76] found id: "c56f64c3fe77c258bef4bbf05b666c6e0f0b07c60a576cf8f489af832a850b70"
	I0816 22:43:50.445310   20101 cri.go:76] found id: "5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36"
	I0816 22:43:50.445315   20101 cri.go:76] found id: "1a95abd0c58fec2e599b7fce2863703efdb6bdba4e9f4ead191033cb41d2b50d"
	I0816 22:43:50.445322   20101 cri.go:76] found id: "1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d"
	I0816 22:43:50.445327   20101 cri.go:76] found id: "7237baa217ae7218864322d6893eb52cef6f4f80ba58f4e4a0683b2bb5861a85"
	I0816 22:43:50.445332   20101 cri.go:76] found id: "9672b955ced7e9fac049fbb30c5f91f4d0f06cda7931c585f9174c9e9693835e"
	I0816 22:43:50.445338   20101 cri.go:76] found id: "e30d5daacc87f690bc273d129f168487c51648dd2e6bc8d0870a0848215e02e9"
	I0816 22:43:50.445346   20101 cri.go:76] found id: ""
	I0816 22:43:50.445403   20101 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0816 22:43:50.487719   20101 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"12379a58cddde3b039f7bc0fceb518fa6e3feb0fb30d42e7fd083c4613b93da2","pid":7726,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12379a58cddde3b039f7bc0fceb518fa6e3feb0fb30d42e7fd083c4613b93da2","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/12379a58cddde3b039f7bc0fceb518fa6e3feb0fb30d42e7fd083c4613b93da2/rootfs","created":"2021-08-16T22:42:55.53703049Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"12379a58cddde3b039f7bc0fceb518fa6e3feb0fb30d42e7fd083c4613b93da2","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-5d8978d65d-mblkl_4bdd50b5-fee3-11eb-bea8-525400bf2371"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1a95abd0c58fec2e599b7fce2863703efdb6bdba4e9f4ead191033cb41d2b50d","pid":6545,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a95abd0c58
fec2e599b7fce2863703efdb6bdba4e9f4ead191033cb41d2b50d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a95abd0c58fec2e599b7fce2863703efdb6bdba4e9f4ead191033cb41d2b50d/rootfs","created":"2021-08-16T22:42:23.486492937Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"d44da39af525283e271fe5e2c1d6b6ca8c6c5c4fbe472dc8018a26677fb22cfe"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d","pid":6531,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d/rootfs","created":"2021-08-16T22:42:23.482402604Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io
.kubernetes.cri.sandbox-id":"d8a20faccc88683a91a38b04fce625884ff0902e3655da6b1799f8bd0bc2805d"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2a86821519acf60a4fa28241931865c536eb6fd06d8b333b1c0cdb3d9e4b2472","pid":7340,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a86821519acf60a4fa28241931865c536eb6fd06d8b333b1c0cdb3d9e4b2472","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a86821519acf60a4fa28241931865c536eb6fd06d8b333b1c0cdb3d9e4b2472/rootfs","created":"2021-08-16T22:42:53.543529247Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2a86821519acf60a4fa28241931865c536eb6fd06d8b333b1c0cdb3d9e4b2472","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_4a660c8a-fee3-11eb-bea8-525400bf2371"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"2f7b780c4307b02ef87b4146fa91f3e11360d6db0cbc012d1cc73f4b3bf9f70c","pid":7546,"status":"running","bundle":"/run/containerd/io.containerd.runtim
e.v2.task/k8s.io/2f7b780c4307b02ef87b4146fa91f3e11360d6db0cbc012d1cc73f4b3bf9f70c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f7b780c4307b02ef87b4146fa91f3e11360d6db0cbc012d1cc73f4b3bf9f70c/rootfs","created":"2021-08-16T22:42:54.624092758Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"2f7b780c4307b02ef87b4146fa91f3e11360d6db0cbc012d1cc73f4b3bf9f70c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-5b494cc544-nznrc_4b4f43e1-fee3-11eb-bea8-525400bf2371"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a1b01ca39e0d7fb0b2bde627216df6d80705cf380bb968986bbab4dc7c9c402","pid":6925,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a1b01ca39e0d7fb0b2bde627216df6d80705cf380bb968986bbab4dc7c9c402","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a1b01ca39e0d7fb0b2bde627216df6d80705cf380bb968986bbab4dc7c9c402/rootfs","created":"2021-08-16T22:42:50.6338
71657Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3a1b01ca39e0d7fb0b2bde627216df6d80705cf380bb968986bbab4dc7c9c402","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-jmg6d_4905cd4b-fee3-11eb-bea8-525400bf2371"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a4ddc9391f6bbf393f2f69a914b5280c8cf2430956fe3b1d73869edb2579212","pid":6318,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a4ddc9391f6bbf393f2f69a914b5280c8cf2430956fe3b1d73869edb2579212","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a4ddc9391f6bbf393f2f69a914b5280c8cf2430956fe3b1d73869edb2579212/rootfs","created":"2021-08-16T22:42:22.694886933Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3a4ddc9391f6bbf393f2f69a914b5280c8cf2430956fe3b1d73869edb2579212","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-old-k8s-version-20210816223154-6986_793
a4188543ad631a78be72704d73ea2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"40db6b8d20e4bd736d3b67a9f90c1c18618ce7e7920a19a8dbc53971d69eacb5","pid":7635,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/40db6b8d20e4bd736d3b67a9f90c1c18618ce7e7920a19a8dbc53971d69eacb5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/40db6b8d20e4bd736d3b67a9f90c1c18618ce7e7920a19a8dbc53971d69eacb5/rootfs","created":"2021-08-16T22:42:54.780038182Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"40db6b8d20e4bd736d3b67a9f90c1c18618ce7e7920a19a8dbc53971d69eacb5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-8546d8b77b-vpvp5_4b7525e3-fee3-11eb-bea8-525400bf2371"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"59dc9b6f995b1a58ca9462c6b8f16090e1eed37fd67efe185a9750e039d2012c","pid":6400,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/59dc9b6f995b1a58ca9462c6b8f16090e1e
ed37fd67efe185a9750e039d2012c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/59dc9b6f995b1a58ca9462c6b8f16090e1eed37fd67efe185a9750e039d2012c/rootfs","created":"2021-08-16T22:42:22.981344745Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"59dc9b6f995b1a58ca9462c6b8f16090e1eed37fd67efe185a9750e039d2012c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-old-k8s-version-20210816223154-6986_999695c487c3d26dbecc6adddcd12120"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36","pid":6579,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36/rootfs","created":"2021-08-16T22:42:24.802023848Z","annotations":{"io.kubernetes.cri.container-name":"etcd","i
o.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"59dc9b6f995b1a58ca9462c6b8f16090e1eed37fd67efe185a9750e039d2012c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6ea2a5a98c7784a7ad7702acc262ab5efb1fb3b94ee224698a6ebf4024bc79d9","pid":7319,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ea2a5a98c7784a7ad7702acc262ab5efb1fb3b94ee224698a6ebf4024bc79d9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ea2a5a98c7784a7ad7702acc262ab5efb1fb3b94ee224698a6ebf4024bc79d9/rootfs","created":"2021-08-16T22:42:53.206035797Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e40105c5a7146568f75a2427e1b2e99fb1b9de02d027483cf5917e9baf277fa8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7237baa217ae7218864322d6893eb52cef6f4f80ba58f4e4a0683b2bb5861a85","pid":6453,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7237baa217ae721
8864322d6893eb52cef6f4f80ba58f4e4a0683b2bb5861a85","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7237baa217ae7218864322d6893eb52cef6f4f80ba58f4e4a0683b2bb5861a85/rootfs","created":"2021-08-16T22:42:23.092003412Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3a4ddc9391f6bbf393f2f69a914b5280c8cf2430956fe3b1d73869edb2579212"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b","pid":7404,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b/rootfs","created":"2021-08-16T22:42:53.953742775Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubern
etes.cri.sandbox-id":"2a86821519acf60a4fa28241931865c536eb6fd06d8b333b1c0cdb3d9e4b2472"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"c56f64c3fe77c258bef4bbf05b666c6e0f0b07c60a576cf8f489af832a850b70","pid":6975,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c56f64c3fe77c258bef4bbf05b666c6e0f0b07c60a576cf8f489af832a850b70","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/c56f64c3fe77c258bef4bbf05b666c6e0f0b07c60a576cf8f489af832a850b70/rootfs","created":"2021-08-16T22:42:50.918927055Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3a1b01ca39e0d7fb0b2bde627216df6d80705cf380bb968986bbab4dc7c9c402"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d44da39af525283e271fe5e2c1d6b6ca8c6c5c4fbe472dc8018a26677fb22cfe","pid":6388,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d44da39af525283e271fe5e2c1d6b6ca8c6c5c4fbe472dc8018a26677fb22cfe","
rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d44da39af525283e271fe5e2c1d6b6ca8c6c5c4fbe472dc8018a26677fb22cfe/rootfs","created":"2021-08-16T22:42:22.870472464Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"d44da39af525283e271fe5e2c1d6b6ca8c6c5c4fbe472dc8018a26677fb22cfe","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-old-k8s-version-20210816223154-6986_b42c3e5fa8e81a5a78a3a372f8953126"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d8a20faccc88683a91a38b04fce625884ff0902e3655da6b1799f8bd0bc2805d","pid":6373,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8a20faccc88683a91a38b04fce625884ff0902e3655da6b1799f8bd0bc2805d","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d8a20faccc88683a91a38b04fce625884ff0902e3655da6b1799f8bd0bc2805d/rootfs","created":"2021-08-16T22:42:22.878109147Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernet
es.cri.sandbox-id":"d8a20faccc88683a91a38b04fce625884ff0902e3655da6b1799f8bd0bc2805d","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-old-k8s-version-20210816223154-6986_ba371a1cc55ef6aa89a1ba4554611582"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e30d5daacc87f690bc273d129f168487c51648dd2e6bc8d0870a0848215e02e9","pid":7782,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e30d5daacc87f690bc273d129f168487c51648dd2e6bc8d0870a0848215e02e9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e30d5daacc87f690bc273d129f168487c51648dd2e6bc8d0870a0848215e02e9/rootfs","created":"2021-08-16T22:42:56.116873996Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"12379a58cddde3b039f7bc0fceb518fa6e3feb0fb30d42e7fd083c4613b93da2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e40105c5a7146568f75a2427e1b2e99fb1b9de02d027483cf5917e9baf277fa8
","pid":7257,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e40105c5a7146568f75a2427e1b2e99fb1b9de02d027483cf5917e9baf277fa8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e40105c5a7146568f75a2427e1b2e99fb1b9de02d027483cf5917e9baf277fa8/rootfs","created":"2021-08-16T22:42:52.759803807Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e40105c5a7146568f75a2427e1b2e99fb1b9de02d027483cf5917e9baf277fa8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-fb8b8dccf-r87qj_48ebe67d-fee3-11eb-bea8-525400bf2371"},"owner":"root"}]
	I0816 22:43:50.487965   20101 cri.go:113] list returned 18 containers
	I0816 22:43:50.487983   20101 cri.go:116] container: {ID:12379a58cddde3b039f7bc0fceb518fa6e3feb0fb30d42e7fd083c4613b93da2 Status:running}
	I0816 22:43:50.487996   20101 cri.go:118] skipping 12379a58cddde3b039f7bc0fceb518fa6e3feb0fb30d42e7fd083c4613b93da2 - not in ps
	I0816 22:43:50.488002   20101 cri.go:116] container: {ID:1a95abd0c58fec2e599b7fce2863703efdb6bdba4e9f4ead191033cb41d2b50d Status:paused}
	I0816 22:43:50.488016   20101 cri.go:122] skipping {1a95abd0c58fec2e599b7fce2863703efdb6bdba4e9f4ead191033cb41d2b50d paused}: state = "paused", want "running"
	I0816 22:43:50.488032   20101 cri.go:116] container: {ID:1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d Status:paused}
	I0816 22:43:50.488039   20101 cri.go:122] skipping {1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d paused}: state = "paused", want "running"
	I0816 22:43:50.488048   20101 cri.go:116] container: {ID:2a86821519acf60a4fa28241931865c536eb6fd06d8b333b1c0cdb3d9e4b2472 Status:running}
	I0816 22:43:50.488055   20101 cri.go:118] skipping 2a86821519acf60a4fa28241931865c536eb6fd06d8b333b1c0cdb3d9e4b2472 - not in ps
	I0816 22:43:50.488063   20101 cri.go:116] container: {ID:2f7b780c4307b02ef87b4146fa91f3e11360d6db0cbc012d1cc73f4b3bf9f70c Status:running}
	I0816 22:43:50.488069   20101 cri.go:118] skipping 2f7b780c4307b02ef87b4146fa91f3e11360d6db0cbc012d1cc73f4b3bf9f70c - not in ps
	I0816 22:43:50.488074   20101 cri.go:116] container: {ID:3a1b01ca39e0d7fb0b2bde627216df6d80705cf380bb968986bbab4dc7c9c402 Status:running}
	I0816 22:43:50.488080   20101 cri.go:118] skipping 3a1b01ca39e0d7fb0b2bde627216df6d80705cf380bb968986bbab4dc7c9c402 - not in ps
	I0816 22:43:50.488085   20101 cri.go:116] container: {ID:3a4ddc9391f6bbf393f2f69a914b5280c8cf2430956fe3b1d73869edb2579212 Status:running}
	I0816 22:43:50.488091   20101 cri.go:118] skipping 3a4ddc9391f6bbf393f2f69a914b5280c8cf2430956fe3b1d73869edb2579212 - not in ps
	I0816 22:43:50.488096   20101 cri.go:116] container: {ID:40db6b8d20e4bd736d3b67a9f90c1c18618ce7e7920a19a8dbc53971d69eacb5 Status:running}
	I0816 22:43:50.488101   20101 cri.go:118] skipping 40db6b8d20e4bd736d3b67a9f90c1c18618ce7e7920a19a8dbc53971d69eacb5 - not in ps
	I0816 22:43:50.488106   20101 cri.go:116] container: {ID:59dc9b6f995b1a58ca9462c6b8f16090e1eed37fd67efe185a9750e039d2012c Status:running}
	I0816 22:43:50.488112   20101 cri.go:118] skipping 59dc9b6f995b1a58ca9462c6b8f16090e1eed37fd67efe185a9750e039d2012c - not in ps
	I0816 22:43:50.488117   20101 cri.go:116] container: {ID:5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36 Status:running}
	I0816 22:43:50.488123   20101 cri.go:116] container: {ID:6ea2a5a98c7784a7ad7702acc262ab5efb1fb3b94ee224698a6ebf4024bc79d9 Status:running}
	I0816 22:43:50.488129   20101 cri.go:116] container: {ID:7237baa217ae7218864322d6893eb52cef6f4f80ba58f4e4a0683b2bb5861a85 Status:running}
	I0816 22:43:50.488138   20101 cri.go:116] container: {ID:7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b Status:running}
	I0816 22:43:50.488143   20101 cri.go:116] container: {ID:c56f64c3fe77c258bef4bbf05b666c6e0f0b07c60a576cf8f489af832a850b70 Status:running}
	I0816 22:43:50.488151   20101 cri.go:116] container: {ID:d44da39af525283e271fe5e2c1d6b6ca8c6c5c4fbe472dc8018a26677fb22cfe Status:running}
	I0816 22:43:50.488158   20101 cri.go:118] skipping d44da39af525283e271fe5e2c1d6b6ca8c6c5c4fbe472dc8018a26677fb22cfe - not in ps
	I0816 22:43:50.488163   20101 cri.go:116] container: {ID:d8a20faccc88683a91a38b04fce625884ff0902e3655da6b1799f8bd0bc2805d Status:running}
	I0816 22:43:50.488175   20101 cri.go:118] skipping d8a20faccc88683a91a38b04fce625884ff0902e3655da6b1799f8bd0bc2805d - not in ps
	I0816 22:43:50.488180   20101 cri.go:116] container: {ID:e30d5daacc87f690bc273d129f168487c51648dd2e6bc8d0870a0848215e02e9 Status:running}
	I0816 22:43:50.488188   20101 cri.go:116] container: {ID:e40105c5a7146568f75a2427e1b2e99fb1b9de02d027483cf5917e9baf277fa8 Status:running}
	I0816 22:43:50.488195   20101 cri.go:118] skipping e40105c5a7146568f75a2427e1b2e99fb1b9de02d027483cf5917e9baf277fa8 - not in ps
	I0816 22:43:50.488245   20101 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36
	I0816 22:43:50.514178   20101 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36 6ea2a5a98c7784a7ad7702acc262ab5efb1fb3b94ee224698a6ebf4024bc79d9
	I0816 22:43:50.543126   20101 out.go:177] 
	W0816 22:43:50.543321   20101 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36 6ea2a5a98c7784a7ad7702acc262ab5efb1fb3b94ee224698a6ebf4024bc79d9: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:43:50Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0816 22:43:50.543342   20101 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0816 22:43:50.546492   20101 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0816 22:43:50.548170   20101 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p old-k8s-version-20210816223154-6986 --alsologtostderr -v=1 failed: exit status 80
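The failure above is mechanical: minikube paused the first container with a single-ID `sudo runc --root /run/containerd/runc/k8s.io pause <id>`, then passed two container IDs to one `runc pause` invocation, and runc rejects anything but exactly one argument (`"pause" requires exactly 1 argument(s)`). Each retry paused one more container and then tripped over the same multi-ID call, so when the retry budget ran out the cluster was left only partially paused and the command exited with GUEST_PAUSE (exit status 80). Below is a minimal sketch of the obvious workaround, pausing one container per invocation; it is an illustration, not minikube's actual fix, and the JSON field names are taken from the `runc list -f json` output captured above.

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// listedContainer models the fields of `runc list -f json` used here;
	// the full records above also carry pid, bundle, rootfs, created,
	// annotations, and owner.
	type listedContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}
	
	const runcRoot = "/run/containerd/runc/k8s.io"
	
	// pauseRunning lists containers and pauses each running one with its
	// own `runc pause` invocation, since runc accepts exactly one ID.
	func pauseRunning() error {
		out, err := exec.Command("sudo", "runc", "--root", runcRoot, "list", "-f", "json").Output()
		if err != nil {
			return fmt.Errorf("runc list: %w", err)
		}
		var containers []listedContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			return fmt.Errorf("decode runc list output: %w", err)
		}
		for _, c := range containers {
			if c.Status != "running" {
				continue // already paused or stopped
			}
			// One ID per call: `runc pause <container-id>`.
			if msg, err := exec.Command("sudo", "runc", "--root", runcRoot, "pause", c.ID).CombinedOutput(); err != nil {
				return fmt.Errorf("runc pause %s: %w: %s", c.ID, err, msg)
			}
		}
		return nil
	}
	
	func main() {
		if err := pauseRunning(); err != nil {
			fmt.Println(err)
		}
	}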
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210816223154-6986 -n old-k8s-version-20210816223154-6986

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210816223154-6986 -n old-k8s-version-20210816223154-6986: exit status 2 (14.467881842s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 22:44:05.021071   20129 status.go:422] Error apiserver status: https://192.168.94.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 2 (may be ok)
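The `exit status 2` is itself diagnostic: the host is Running, but the apiserver's /healthz returns 500 with only `[-]etcd failed`, which lines up with the partial pause above — the etcd container (ID beginning 5d87176f, per the `runc list` annotations) was among those suspended before the pause command aborted. The verbose health report can be reproduced directly against the endpoint the status check queried. A minimal sketch follows, assuming unauthenticated access to /healthz is permitted; it skips certificate verification, so it is suitable for debugging only.

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)
	
	func main() {
		// Endpoint taken from the status error above.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.94.246:8443/healthz?verbose")
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A 500 here listing "[-]etcd failed" matches the post-mortem output.
		fmt.Println(resp.Status)
		fmt.Println(string(body))
	}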
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20210816223154-6986 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p old-k8s-version-20210816223154-6986 logs -n 25: exit status 110 (1m5.122528794s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                | disable-driver-mounts-20210816223418-6986      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:18 UTC | Mon, 16 Aug 2021 22:34:18 UTC |
	|         | disable-driver-mounts-20210816223418-6986         |                                                |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:31:54 UTC | Mon, 16 Aug 2021 22:34:34 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:44 UTC | Mon, 16 Aug 2021 22:34:45 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816223156-6986                 | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:31:56 UTC | Mon, 16 Aug 2021 22:35:04 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:33:33 UTC | Mon, 16 Aug 2021 22:35:08 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:16 UTC | Mon, 16 Aug 2021 22:35:17 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:19 UTC | Mon, 16 Aug 2021 22:35:20 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:18 UTC | Mon, 16 Aug 2021 22:35:42 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:51 UTC | Mon, 16 Aug 2021 22:35:52 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:45 UTC | Mon, 16 Aug 2021 22:36:18 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:18 UTC | Mon, 16 Aug 2021 22:36:18 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:17 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:20 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:52 UTC | Mon, 16 Aug 2021 22:37:25 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:37:25 UTC | Mon, 16 Aug 2021 22:37:25 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816223156-6986                 | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:42:31 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:42:42 UTC | Mon, 16 Aug 2021 22:42:42 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:18 UTC | Mon, 16 Aug 2021 22:43:37 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:43:44 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:43:47 UTC | Mon, 16 Aug 2021 22:43:47 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:37:25 UTC | Mon, 16 Aug 2021 22:43:52 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:43:55 UTC | Mon, 16 Aug 2021 22:43:55 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:44:03 UTC | Mon, 16 Aug 2021 22:44:03 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
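
The rows above are plain CLI invocations recorded in minikube's audit log. As a hedged sketch of how a test harness might drive one of them (flags copied from the default-k8s-different-port start row; the binary path and profile name are specific to this run):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Reproduces one audit-log row: "start -p default-k8s-different-port-..."
	// with the exact flags recorded in the table above.
	cmd := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "default-k8s-different-port-20210816223418-6986",
		"--memory=2200", "--alsologtostderr", "--wait=true",
		"--apiserver-port=8444", "--driver=kvm2",
		"--container-runtime=containerd",
		"--kubernetes-version=v1.21.3")
	out, err := cmd.CombinedOutput() // the report captures stdout and stderr together
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("start failed:", err)
	}
}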
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:37:25
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
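
The header layout quoted above is the standard glog/klog format. A minimal Go sketch for splitting one such line into its fields (the regular expression is an illustration, not minikube's own parser):

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" layout.
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

func main() {
	m := klogLine.FindStringSubmatch(
		"I0816 22:37:25.306577   19204 out.go:298] Setting OutFile to fd 1 ...")
	if m != nil {
		fmt.Println("severity:", m[1], "date:", m[2], "time:", m[3],
			"pid:", m[4], "source:", m[5], "msg:", m[6])
	}
}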
	I0816 22:37:25.306577   19204 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:37:25.306653   19204 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:37:25.306656   19204 out.go:311] Setting ErrFile to fd 2...
	I0816 22:37:25.306663   19204 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:37:25.307072   19204 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:37:25.307547   19204 out.go:305] Setting JSON to false
	I0816 22:37:25.351342   19204 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":4807,"bootTime":1629148638,"procs":188,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:37:25.351461   19204 start.go:121] virtualization: kvm guest
	I0816 22:37:25.353955   19204 out.go:177] * [default-k8s-different-port-20210816223418-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:37:25.355393   19204 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:37:25.354127   19204 notify.go:169] Checking for updates...
	I0816 22:37:25.356781   19204 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:37:25.358158   19204 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:37:25.364678   19204 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:37:25.365267   19204 config.go:177] Loaded profile config "default-k8s-different-port-20210816223418-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:37:25.365899   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:37:25.365956   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:37:25.381650   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46493
	I0816 22:37:25.382065   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:37:25.382798   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:37:25.382820   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:37:25.383330   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:37:25.383519   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:25.383721   19204 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:37:25.384192   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:37:25.384260   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:37:25.401082   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44899
	I0816 22:37:25.402507   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:37:25.403115   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:37:25.403179   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:37:25.403663   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:37:25.403903   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:25.439751   19204 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 22:37:25.439781   19204 start.go:278] selected driver: kvm2
	I0816 22:37:25.439788   19204 start.go:751] validating driver "kvm2" against &{Name:default-k8s-different-port-20210816223418-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210816223418-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:37:25.439905   19204 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 22:37:25.441282   19204 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:37:25.441453   19204 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 22:37:25.455762   19204 install.go:137] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2 version is 1.22.0
	I0816 22:37:25.456183   19204 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 22:37:25.456219   19204 cni.go:93] Creating CNI manager for ""
	I0816 22:37:25.456234   19204 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
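
The line above records the default-CNI decision for this driver/runtime pair. A deliberately simplified Go sketch of that kind of rule (minikube's real selection logic in cni.go covers more cases than shown here):

package main

import "fmt"

// chooseCNI sketches the decision logged above: with the kvm2 driver and
// the containerd runtime, minikube recommends the bridge CNI. This is a
// simplification, not minikube's full rule set.
func chooseCNI(driver, runtime string) string {
	if driver == "kvm2" && runtime == "containerd" {
		return "bridge"
	}
	return "" // other combinations elided in this sketch
}

func main() {
	fmt.Println(chooseCNI("kvm2", "containerd")) // bridge
}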
	I0816 22:37:25.456245   19204 start_flags.go:277] config:
	{Name:default-k8s-different-port-20210816223418-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210816223418-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:37:25.456384   19204 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:37:25.458420   19204 out.go:177] * Starting control plane node default-k8s-different-port-20210816223418-6986 in cluster default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.458447   19204 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:37:25.458480   19204 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0816 22:37:25.458495   19204 cache.go:56] Caching tarball of preloaded images
	I0816 22:37:25.458602   19204 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 22:37:25.458622   19204 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0816 22:37:25.458779   19204 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/config.json ...
	I0816 22:37:25.459003   19204 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:37:25.459033   19204 start.go:313] acquiring machines lock for default-k8s-different-port-20210816223418-6986: {Name:mk808edd60d1305a42bb85791729eff4573dbb15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 22:37:25.459101   19204 start.go:317] acquired machines lock for "default-k8s-different-port-20210816223418-6986" in 48.071µs
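
The machines lock above is a named lock acquired with a 500ms retry delay and a 13m overall timeout. A generic Go sketch of such an acquire loop (the lock-file mechanism here is illustrative, not minikube's implementation):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire retries an exclusive-create lock file every `delay` until
// `timeout` elapses, mirroring the Delay/Timeout fields in the log line.
func acquire(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return nil // lock held; remove the file to release it
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for lock")
		}
		time.Sleep(delay)
	}
}

func main() {
	if err := acquire("/tmp/machines.lock", 500*time.Millisecond, 13*time.Minute); err != nil {
		fmt.Println(err)
	}
}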
	I0816 22:37:25.459123   19204 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:37:25.459131   19204 fix.go:55] fixHost starting: 
	I0816 22:37:25.459569   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:37:25.459614   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:37:25.473634   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44349
	I0816 22:37:25.474153   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:37:25.474765   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:37:25.474786   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:37:25.475205   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:37:25.475409   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:25.475621   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetState
	I0816 22:37:25.479447   19204 fix.go:108] recreateIfNeeded on default-k8s-different-port-20210816223418-6986: state=Stopped err=<nil>
	I0816 22:37:25.479498   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	W0816 22:37:25.479660   19204 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:37:21.322104   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:21.822129   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:22.321669   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:22.821492   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:23.322452   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:23.822419   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:24.322141   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:24.821615   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:24.856062   18923 api_server.go:70] duration metric: took 8.045517198s to wait for apiserver process to appear ...
	I0816 22:37:24.856091   18923 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:37:24.856103   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:24.856734   18923 api_server.go:255] stopped: https://192.168.116.66:8443/healthz: Get "https://192.168.116.66:8443/healthz": dial tcp 192.168.116.66:8443: connect: connection refused
	I0816 22:37:25.357442   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
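
The sequence above alternates pgrep checks for the apiserver process with HTTPS probes of /healthz at roughly 500ms intervals until the endpoint answers. A minimal Go sketch of the probe loop (the skip-verify transport is an assumption for a local, self-signed endpoint):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls an apiserver /healthz endpoint until it responds
// 200 OK or the deadline passes, mirroring the retry cadence above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption: the local apiserver cert is not in the system
		// trust store, so verification is skipped for the probe.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz at %s did not become ready", url)
}

func main() {
	if err := waitHealthz("https://192.168.116.66:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}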
	I0816 22:37:22.382628   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Getting to WaitForSSH function...
	I0816 22:37:22.388062   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:22.388472   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:22.388501   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:22.388736   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Using SSH client type: external
	I0816 22:37:22.388774   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa (-rw-------)
	I0816 22:37:22.388825   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.105.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:37:22.388851   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | About to run SSH command:
	I0816 22:37:22.388868   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | exit 0
	I0816 22:37:23.527862   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | SSH cmd err, output: <nil>: 
	I0816 22:37:23.528297   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetConfigRaw
	I0816 22:37:23.529175   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetIP
	I0816 22:37:23.535445   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.535831   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.535862   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.536325   18929 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/config.json ...
	I0816 22:37:23.536603   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:23.536838   18929 machine.go:88] provisioning docker machine ...
	I0816 22:37:23.536860   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:23.537120   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetMachineName
	I0816 22:37:23.537298   18929 buildroot.go:166] provisioning hostname "embed-certs-20210816223333-6986"
	I0816 22:37:23.537328   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetMachineName
	I0816 22:37:23.537497   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:23.543084   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.543520   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.543560   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.543770   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:23.543953   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:23.544122   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:23.544284   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:23.544470   18929 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:23.544676   18929 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.105.129 22 <nil> <nil>}
	I0816 22:37:23.544698   18929 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20210816223333-6986 && echo "embed-certs-20210816223333-6986" | sudo tee /etc/hostname
	I0816 22:37:23.682935   18929 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20210816223333-6986
	
	I0816 22:37:23.682982   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:23.689555   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.690034   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.690071   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.690297   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:23.690526   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:23.690738   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:23.690910   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:23.691116   18929 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:23.691321   18929 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.105.129 22 <nil> <nil>}
	I0816 22:37:23.691351   18929 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20210816223333-6986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210816223333-6986/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20210816223333-6986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:37:23.826330   18929 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:37:23.826357   18929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:37:23.826393   18929 buildroot.go:174] setting up certificates
	I0816 22:37:23.826403   18929 provision.go:83] configureAuth start
	I0816 22:37:23.826415   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetMachineName
	I0816 22:37:23.826673   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetIP
	I0816 22:37:23.832833   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.833221   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.833252   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.833505   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:23.839058   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.839437   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.839468   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.839721   18929 provision.go:138] copyHostCerts
	I0816 22:37:23.839785   18929 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:37:23.839801   18929 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:37:23.839858   18929 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:37:23.840010   18929 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:37:23.840023   18929 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:37:23.840050   18929 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:37:23.840148   18929 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:37:23.840160   18929 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:37:23.840181   18929 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
	I0816 22:37:23.840251   18929 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20210816223333-6986 san=[192.168.105.129 192.168.105.129 localhost 127.0.0.1 minikube embed-certs-20210816223333-6986]
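
The provision step above issues a server certificate signed by the cached CA, with the SAN list taken from the machine's addresses. A self-contained Go sketch of the same operation with crypto/x509 (a throwaway CA is generated here in place of the cached ca.pem/ca-key.pem):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA key pair; in the run above this is the cached CA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN list from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-20210816223333-6986"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		DNSNames:     []string{"localhost", "minikube", "embed-certs-20210816223333-6986"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.105.129"), net.ParseIP("127.0.0.1")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("issued server cert, %d DER bytes\n", len(srvDER))
}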
	I0816 22:37:24.071276   18929 provision.go:172] copyRemoteCerts
	I0816 22:37:24.071347   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:37:24.071383   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.077584   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.078065   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.078133   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.078307   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.078500   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.078636   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.078743   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:37:24.168996   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:37:24.190581   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0816 22:37:24.211894   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:37:24.234970   18929 provision.go:86] duration metric: configureAuth took 408.533613ms
	I0816 22:37:24.235001   18929 buildroot.go:189] setting minikube options for container-runtime
	I0816 22:37:24.235282   18929 config.go:177] Loaded profile config "embed-certs-20210816223333-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:37:24.235303   18929 machine.go:91] provisioned docker machine in 698.450664ms
	I0816 22:37:24.235313   18929 start.go:267] post-start starting for "embed-certs-20210816223333-6986" (driver="kvm2")
	I0816 22:37:24.235321   18929 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:37:24.235352   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.235711   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:37:24.235748   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.242219   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.242647   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.242677   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.242968   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.243197   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.243376   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.243542   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:37:24.342244   18929 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:37:24.348430   18929 info.go:137] Remote host: Buildroot 2020.02.12
	I0816 22:37:24.348458   18929 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:37:24.348527   18929 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:37:24.348678   18929 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0816 22:37:24.348794   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:37:24.358370   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:37:24.378832   18929 start.go:270] post-start completed in 143.493882ms
	I0816 22:37:24.378891   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.379183   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.385172   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.385565   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.385596   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.385720   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.385936   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.386069   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.386238   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.386404   18929 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:24.386604   18929 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.105.129 22 <nil> <nil>}
	I0816 22:37:24.386621   18929 main.go:130] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0816 22:37:24.513150   18929 main.go:130] libmachine: SSH cmd err, output: <nil>: 1629153444.435910196
	
	I0816 22:37:24.513175   18929 fix.go:212] guest clock: 1629153444.435910196
	I0816 22:37:24.513185   18929 fix.go:225] Guest: 2021-08-16 22:37:24.435910196 +0000 UTC Remote: 2021-08-16 22:37:24.379164096 +0000 UTC m=+28.470229855 (delta=56.7461ms)
	I0816 22:37:24.513209   18929 fix.go:196] guest clock delta is within tolerance: 56.7461ms
	I0816 22:37:24.513220   18929 fix.go:57] fixHost completed within 14.813246061s
	I0816 22:37:24.513226   18929 start.go:80] releasing machines lock for "embed-certs-20210816223333-6986", held for 14.813280431s
	I0816 22:37:24.513267   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.513532   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetIP
	I0816 22:37:24.519703   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.520118   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.520149   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.520319   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.520528   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.521062   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.521300   18929 ssh_runner.go:149] Run: systemctl --version
	I0816 22:37:24.521326   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.521364   18929 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:37:24.521406   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.527844   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.527923   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.528257   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.528281   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.528308   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.528323   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.528556   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.528678   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.528724   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.528933   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.528943   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.529108   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.529179   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:37:24.529267   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:37:24.634682   18929 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:37:24.634891   18929 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:37:24.131199   18635 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:37:24.131267   18635 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:37:24.140028   18635 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:37:24.157600   18635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:37:24.171359   18635 system_pods.go:59] 8 kube-system pods found
	I0816 22:37:24.171398   18635 system_pods.go:61] "coredns-fb8b8dccf-qwcrg" [fd98f945-fee1-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171407   18635 system_pods.go:61] "etcd-old-k8s-version-20210816223154-6986" [1d77612e-fee2-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171414   18635 system_pods.go:61] "kube-apiserver-old-k8s-version-20210816223154-6986" [152107a2-fee2-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171420   18635 system_pods.go:61] "kube-controller-manager-old-k8s-version-20210816223154-6986" [8620a0da-fee2-11eb-b5b6-525400bf2371] Pending
	I0816 22:37:24.171426   18635 system_pods.go:61] "kube-proxy-nvb2s" [fdaa2b42-fee1-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171438   18635 system_pods.go:61] "kube-scheduler-old-k8s-version-20210816223154-6986" [1b1505e6-fee2-11eb-bb5b-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:37:24.171454   18635 system_pods.go:61] "metrics-server-8546d8b77b-gl6jr" [28801d4e-fee2-11eb-bb5b-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:37:24.171462   18635 system_pods.go:61] "storage-provisioner" [ff1e11f1-fee1-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171469   18635 system_pods.go:74] duration metric: took 13.840978ms to wait for pod list to return data ...
	I0816 22:37:24.171481   18635 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:37:24.176303   18635 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:37:24.176347   18635 node_conditions.go:123] node cpu capacity is 2
	I0816 22:37:24.176360   18635 node_conditions.go:105] duration metric: took 4.872863ms to run NodePressure ...
	I0816 22:37:24.176376   18635 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:25.292041   18635 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.115642082s)
	I0816 22:37:25.292077   18635 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:37:25.325547   18635 kubeadm.go:746] kubelet initialised
	I0816 22:37:25.325574   18635 kubeadm.go:747] duration metric: took 33.485813ms waiting for restarted kubelet to initialise ...
	I0816 22:37:25.325590   18635 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:37:25.351142   18635 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:27.387702   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:25.482074   19204 out.go:177] * Restarting existing kvm2 VM for "default-k8s-different-port-20210816223418-6986" ...
	I0816 22:37:25.482104   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Start
	I0816 22:37:25.482316   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Ensuring networks are active...
	I0816 22:37:25.484598   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Ensuring network default is active
	I0816 22:37:25.485014   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Ensuring network mk-default-k8s-different-port-20210816223418-6986 is active
	I0816 22:37:25.485452   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Getting domain xml...
	I0816 22:37:25.487765   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Creating domain...
	I0816 22:37:25.923048   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Waiting to get IP...
	I0816 22:37:25.924065   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.924660   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Found IP for machine: 192.168.50.186
	I0816 22:37:25.924682   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Reserving static IP address...
	I0816 22:37:25.924701   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has current primary IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.925155   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "default-k8s-different-port-20210816223418-6986", mac: "52:54:00:ed:d4:05", ip: "192.168.50.186"} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:34:34 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:25.925187   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | skip adding static IP to network mk-default-k8s-different-port-20210816223418-6986 - found existing host DHCP lease matching {name: "default-k8s-different-port-20210816223418-6986", mac: "52:54:00:ed:d4:05", ip: "192.168.50.186"}
	I0816 22:37:25.925202   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Reserved static IP address: 192.168.50.186
	I0816 22:37:25.925219   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Waiting for SSH to be available...
	I0816 22:37:25.925234   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Getting to WaitForSSH function...
	I0816 22:37:25.930369   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.930660   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:34:34 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:25.930705   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.930802   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Using SSH client type: external
	I0816 22:37:25.930842   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa (-rw-------)
	I0816 22:37:25.930888   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:37:25.931010   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | About to run SSH command:
	I0816 22:37:25.931033   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | exit 0
	I0816 22:37:30.356304   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:37:30.356337   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:37:30.357361   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:30.544479   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:30.544514   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:30.857809   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:30.866881   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:30.866920   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:28.652395   18929 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.017437883s)
	I0816 22:37:28.652577   18929 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0816 22:37:28.652647   18929 ssh_runner.go:149] Run: which lz4
	I0816 22:37:28.657345   18929 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0816 22:37:28.662555   18929 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0816 22:37:28.662584   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (928970367 bytes)
	I0816 22:37:31.357641   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:31.385946   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:31.385974   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:31.857651   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:31.878038   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:31.878070   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:32.357730   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:32.371926   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:32.371954   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:32.857204   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:32.867865   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 200:
	ok
	I0816 22:37:32.881085   18923 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:37:32.881113   18923 api_server.go:129] duration metric: took 8.025015474s to wait for apiserver health ...
	I0816 22:37:32.881124   18923 cni.go:93] Creating CNI manager for ""
	I0816 22:37:32.881132   18923 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:37:29.389763   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:31.391442   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:35.155848   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | SSH cmd err, output: exit status 255: 
	I0816 22:37:35.155882   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0816 22:37:35.155896   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | command : exit 0
	I0816 22:37:35.155905   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | err     : exit status 255
	I0816 22:37:35.155918   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | output  : 
	I0816 22:37:32.883184   18923 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:37:32.883268   18923 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:37:32.927942   18923 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:37:33.011939   18923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:37:33.043009   18923 system_pods.go:59] 8 kube-system pods found
	I0816 22:37:33.043056   18923 system_pods.go:61] "coredns-78fcd69978-nzf79" [a95afe1c-4f93-44a8-b669-b42c72f3500d] Running
	I0816 22:37:33.043064   18923 system_pods.go:61] "etcd-no-preload-20210816223156-6986" [fc40f0e0-16ef-4ba8-b5fd-17f4684d3a13] Running
	I0816 22:37:33.043076   18923 system_pods.go:61] "kube-apiserver-no-preload-20210816223156-6986" [f13df2c8-5aa8-49c3-89c0-b584ff8c62c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 22:37:33.043083   18923 system_pods.go:61] "kube-controller-manager-no-preload-20210816223156-6986" [8b866a1c-d283-4410-acbf-be2dbaa0f025] Running
	I0816 22:37:33.043094   18923 system_pods.go:61] "kube-proxy-64m6s" [fc5086fe-a671-4078-b76c-0c8f0656dca6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:37:33.043108   18923 system_pods.go:61] "kube-scheduler-no-preload-20210816223156-6986" [5db4c302-251a-47dc-90b9-424206ed445d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:37:33.043123   18923 system_pods.go:61] "metrics-server-7c784ccb57-44llk" [319102e5-661e-43bc-9c07-07463f6b1e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:37:33.043129   18923 system_pods.go:61] "storage-provisioner" [3da85640-a722-4ba1-a886-926bcaf81b8e] Running
	I0816 22:37:33.043140   18923 system_pods.go:74] duration metric: took 31.176037ms to wait for pod list to return data ...
	I0816 22:37:33.043149   18923 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:37:33.049500   18923 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:37:33.049531   18923 node_conditions.go:123] node cpu capacity is 2
	I0816 22:37:33.049544   18923 node_conditions.go:105] duration metric: took 6.385759ms to run NodePressure ...
	I0816 22:37:33.049562   18923 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:33.993434   18923 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:37:34.012191   18923 kubeadm.go:746] kubelet initialised
	I0816 22:37:34.012215   18923 kubeadm.go:747] duration metric: took 18.75429ms waiting for restarted kubelet to initialise ...
	I0816 22:37:34.012224   18923 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:37:34.033224   18923 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-nzf79" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:34.059145   18923 pod_ready.go:92] pod "coredns-78fcd69978-nzf79" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:34.059169   18923 pod_ready.go:81] duration metric: took 25.912051ms waiting for pod "coredns-78fcd69978-nzf79" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:34.059183   18923 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:32.660993   18929 containerd.go:546] Took 4.003687 seconds to copy over tarball
	I0816 22:37:32.661054   18929 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 22:37:33.892216   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:36.388385   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:38.156062   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Getting to WaitForSSH function...
	I0816 22:37:38.161988   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:38.162321   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:34:34 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:38.162379   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:38.162468   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Using SSH client type: external
	I0816 22:37:38.162499   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa (-rw-------)
	I0816 22:37:38.162538   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:37:38.162552   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | About to run SSH command:
	I0816 22:37:38.162570   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | exit 0
	I0816 22:37:36.102180   18923 pod_ready.go:102] pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:38.889153   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:41.402823   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | SSH cmd err, output: <nil>: 
	I0816 22:37:41.403283   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetConfigRaw
	I0816 22:37:41.404010   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetIP
	I0816 22:37:41.410017   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.410394   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.410432   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.410693   19204 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/config.json ...
	I0816 22:37:41.410926   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:41.411142   19204 machine.go:88] provisioning docker machine ...
	I0816 22:37:41.411167   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:41.411335   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetMachineName
	I0816 22:37:41.411477   19204 buildroot.go:166] provisioning hostname "default-k8s-different-port-20210816223418-6986"
	I0816 22:37:41.411499   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetMachineName
	I0816 22:37:41.411620   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:41.416760   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.417121   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.417154   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.417291   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:41.417487   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:41.417620   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:41.417769   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:41.417933   19204 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:41.418151   19204 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 22:37:41.418167   19204 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20210816223418-6986 && echo "default-k8s-different-port-20210816223418-6986" | sudo tee /etc/hostname
	I0816 22:37:41.560416   19204 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20210816223418-6986
	
	I0816 22:37:41.560449   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:41.566690   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.567028   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.567064   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.567351   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:41.567542   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:41.567703   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:41.567827   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:41.567996   19204 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:41.568193   19204 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 22:37:41.568221   19204 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20210816223418-6986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20210816223418-6986/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20210816223418-6986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:37:41.743484   19204 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:37:41.743518   19204 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:37:41.743559   19204 buildroot.go:174] setting up certificates
	I0816 22:37:41.743576   19204 provision.go:83] configureAuth start
	I0816 22:37:41.743593   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetMachineName
	I0816 22:37:41.743895   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetIP
	I0816 22:37:41.750014   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.750423   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.750467   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.750809   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:41.756158   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.756536   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.756569   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.756717   19204 provision.go:138] copyHostCerts
	I0816 22:37:41.756789   19204 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:37:41.756799   19204 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:37:41.756862   19204 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:37:41.756962   19204 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:37:41.756972   19204 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:37:41.756994   19204 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:37:41.757071   19204 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:37:41.757082   19204 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:37:41.757102   19204 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
	I0816 22:37:41.757156   19204 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20210816223418-6986 san=[192.168.50.186 192.168.50.186 localhost 127.0.0.1 minikube default-k8s-different-port-20210816223418-6986]
	I0816 22:37:42.356131   19204 provision.go:172] copyRemoteCerts
	I0816 22:37:42.356205   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:37:42.356250   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.362214   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.362513   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.362547   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.362780   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.362992   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.363219   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.363363   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:37:42.482862   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:37:42.512838   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0816 22:37:42.540047   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:37:42.568047   19204 provision.go:86] duration metric: configureAuth took 824.454088ms
	I0816 22:37:42.568077   19204 buildroot.go:189] setting minikube options for container-runtime
	I0816 22:37:42.568300   19204 config.go:177] Loaded profile config "default-k8s-different-port-20210816223418-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:37:42.568315   19204 machine.go:91] provisioned docker machine in 1.157156536s
	I0816 22:37:42.568324   19204 start.go:267] post-start starting for "default-k8s-different-port-20210816223418-6986" (driver="kvm2")
	I0816 22:37:42.568333   19204 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:37:42.568368   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.568715   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:37:42.568749   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.574488   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.574891   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.574928   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.575140   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.575339   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.575523   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.575710   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:37:42.676578   19204 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:37:42.682148   19204 info.go:137] Remote host: Buildroot 2020.02.12
	I0816 22:37:42.682181   19204 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:37:42.682247   19204 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:37:42.682409   19204 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0816 22:37:42.682558   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:37:42.691519   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
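The filesync pass above mirrors everything under the .minikube/files tree onto the guest at the same path, which is how 69862.pem lands in /etc/ssl/certs. A minimal Go sketch of that mapping; the short filesRoot used here is illustrative, not the CI path:

	package main

	import (
		"fmt"
		"io/fs"
		"path/filepath"
	)

	// guestAssets walks filesRoot and returns the guest-side destination
	// path for every file found, mirroring the "-> 69862.pem in
	// /etc/ssl/certs" mapping in the log above.
	func guestAssets(filesRoot string) ([]string, error) {
		var targets []string
		err := filepath.WalkDir(filesRoot, func(path string, d fs.DirEntry, walkErr error) error {
			if walkErr != nil || d.IsDir() {
				return walkErr
			}
			rel, err := filepath.Rel(filesRoot, path)
			if err != nil {
				return err
			}
			targets = append(targets, "/"+filepath.ToSlash(rel))
			return nil
		})
		return targets, err
	}

	func main() {
		dsts, err := guestAssets("/home/jenkins/.minikube/files")
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, d := range dsts {
			fmt.Println(d) // e.g. /etc/ssl/certs/69862.pem
		}
	}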
	I0816 22:37:42.711453   19204 start.go:270] post-start completed in 143.110809ms
	I0816 22:37:42.711496   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.711732   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.718125   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.718511   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.718547   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.718832   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.719063   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.719246   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.719404   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.719588   19204 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:42.719762   19204 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 22:37:42.719775   19204 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0816 22:37:42.864591   19204 main.go:130] libmachine: SSH cmd err, output: <nil>: 1629153462.785763979
	
	I0816 22:37:42.864617   19204 fix.go:212] guest clock: 1629153462.785763979
	I0816 22:37:42.864627   19204 fix.go:225] Guest: 2021-08-16 22:37:42.785763979 +0000 UTC Remote: 2021-08-16 22:37:42.711713193 +0000 UTC m=+17.455762277 (delta=74.050786ms)
	I0816 22:37:42.864651   19204 fix.go:196] guest clock delta is within tolerance: 74.050786ms
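The guest-clock check (fix.go) parses the seconds.nanoseconds output of date +%s.%N and compares it with the host-side reading taken at the same instant. A small Go sketch using the two timestamps logged above; the 2s tolerance is an assumed value for illustration, not minikube's actual constant:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns "sec.nsec" output from `date +%s.%N` into a
	// time.Time. The fraction is assumed to be a full 9-digit nanosecond
	// field, as in the log above.
	func parseGuestClock(out string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1629153462.785763979") // guest reading from the log
		if err != nil {
			panic(err)
		}
		host := time.Unix(1629153462, 711713193) // the "Remote:" timestamp above
		delta := host.Sub(guest)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = 2 * time.Second // assumed threshold
		fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance) // prints 74.050786ms
	}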
	I0816 22:37:42.864660   19204 fix.go:57] fixHost completed within 17.405528602s
	I0816 22:37:42.864666   19204 start.go:80] releasing machines lock for "default-k8s-different-port-20210816223418-6986", held for 17.405551891s
	I0816 22:37:42.864711   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.864961   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetIP
	I0816 22:37:42.871077   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.871460   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.871504   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.871781   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.871990   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.872747   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.873035   19204 ssh_runner.go:149] Run: systemctl --version
	I0816 22:37:42.873067   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.873387   19204 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:37:42.873431   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.881178   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.881737   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.882041   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.882067   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.882095   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.882114   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.882432   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.882476   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.882624   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.882654   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.882754   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.882821   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.882852   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:37:42.882932   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:37:42.983824   19204 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:37:42.983945   19204 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:37:41.792417   18923 pod_ready.go:102] pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:42.110388   18923 pod_ready.go:92] pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:42.110425   18923 pod_ready.go:81] duration metric: took 8.051231395s waiting for pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.110443   18923 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.128769   18923 pod_ready.go:92] pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:42.128789   18923 pod_ready.go:81] duration metric: took 18.337432ms waiting for pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.128804   18923 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.137520   18923 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:42.137541   18923 pod_ready.go:81] duration metric: took 8.728281ms waiting for pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.137554   18923 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-64m6s" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.158798   18923 pod_ready.go:92] pod "kube-proxy-64m6s" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:42.158877   18923 pod_ready.go:81] duration metric: took 21.313805ms waiting for pod "kube-proxy-64m6s" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.158908   18923 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:44.512973   18923 pod_ready.go:102] pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:45.697026   18923 pod_ready.go:92] pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:45.697054   18923 pod_ready.go:81] duration metric: took 3.538123235s waiting for pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:45.697067   18923 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace to be "Ready" ...
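Each pod_ready wait above polls the pod's Ready condition until it turns True or the 4m0s window lapses. A sketch of that pattern with client-go; building the clientset is assumed to happen elsewhere, and this is an illustration of the polling shape, not minikube's exact implementation:

	package podwait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls a pod until its PodReady condition is True or the
	// 4m0s window from the log expires. Transient Get errors keep the poll
	// alive rather than failing it, mirroring the waits above.
	func waitPodReady(cs kubernetes.Interface, ns, name string) error {
		return wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}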
	I0816 22:37:44.369712   18929 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (11.708626678s)
	I0816 22:37:44.369752   18929 containerd.go:553] Took 11.708733 seconds to extract the tarball
	I0816 22:37:44.369766   18929 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0816 22:37:44.433232   18929 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:37:44.586357   18929 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:37:44.635654   18929 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0816 22:37:44.682553   18929 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0816 22:37:44.697822   18929 docker.go:153] disabling docker service ...
	I0816 22:37:44.697882   18929 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:37:44.709238   18929 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:37:44.720469   18929 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:37:44.857666   18929 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:37:44.991672   18929 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:37:45.005773   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:37:45.020903   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
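Note how the containerd config.toml travels base64-encoded inside a single bash -c line and is decoded on the guest; that keeps quoting and newlines intact over SSH. A Go sketch that rebuilds such a command for an arbitrary payload (writeFileCmd is a name chosen here for illustration):

	package main

	import (
		"encoding/base64"
		"fmt"
	)

	// writeFileCmd packages file contents as base64 so the whole write fits
	// in one shell line, matching the pattern logged above.
	func writeFileCmd(path, contents string) string {
		b64 := base64.StdEncoding.EncodeToString([]byte(contents))
		return fmt.Sprintf(`sudo mkdir -p /etc/containerd && printf %%s "%s" | base64 -d | sudo tee %s`, b64, path)
	}

	func main() {
		fmt.Println(writeFileCmd("/etc/containerd/config.toml", "root = \"/var/lib/containerd\"\n"))
	}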
	I0816 22:37:45.035818   18929 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:37:45.045388   18929 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:37:45.045444   18929 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:37:45.065836   18929 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 22:37:45.073649   18929 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:37:45.210250   18929 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:37:45.536389   18929 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0816 22:37:45.536468   18929 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:37:45.543940   18929 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
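The retry above (retry.go) simply re-probes the socket path after a delay until the 60s window closes. A Go sketch of the same wait loop; the backoff growth used here is an assumption for illustration:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// waitForFile stats path until it exists or timeout elapses, logging
	// each retry delay the way retry.go does above.
	func waitForFile(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		backoff := time.Second
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for " + path)
			}
			fmt.Printf("will retry after %v: stat %s failed\n", backoff, path)
			time.Sleep(backoff)
			backoff += backoff / 2 // simple 1.5x growth, assumed
		}
	}

	func main() {
		if err := waitForFile("/run/containerd/containerd.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}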
	I0816 22:37:46.648822   18929 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:37:46.654589   18929 start.go:413] Will wait 60s for crictl version
	I0816 22:37:46.654654   18929 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:37:46.687975   18929 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0816 22:37:46.688041   18929 ssh_runner.go:149] Run: containerd --version
	I0816 22:37:46.717960   18929 ssh_runner.go:149] Run: containerd --version
	I0816 22:37:43.671220   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:45.887022   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:47.896514   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:46.994449   19204 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.010481954s)
	I0816 22:37:46.994588   19204 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0816 22:37:46.994677   19204 ssh_runner.go:149] Run: which lz4
	I0816 22:37:46.999431   19204 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 22:37:47.004309   19204 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0816 22:37:47.004338   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (928970367 bytes)
	I0816 22:37:47.723452   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:49.727582   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:46.750218   18929 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0816 22:37:46.750266   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetIP
	I0816 22:37:46.755631   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:46.756018   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:46.756051   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:46.756195   18929 ssh_runner.go:149] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0816 22:37:46.760434   18929 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
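The one-liner above updates /etc/hosts by filtering out any stale host.minikube.internal line, appending the fresh mapping, and copying the temp file back with sudo. The same rewrite expressed as a Go sketch:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHostsEntry mirrors the grep-and-append one-liner above: drop any
	// line already ending in "\t<name>", then append the fresh "ip\tname" row.
	func upsertHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := upsertHostsEntry("/etc/hosts", "192.168.105.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}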
	I0816 22:37:46.770865   18929 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:37:46.770913   18929 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:37:46.804122   18929 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:37:46.804147   18929 containerd.go:517] Images already preloaded, skipping extraction
	I0816 22:37:46.804200   18929 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:37:46.836132   18929 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:37:46.836154   18929 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:37:46.836213   18929 ssh_runner.go:149] Run: sudo crictl info
	I0816 22:37:46.870224   18929 cni.go:93] Creating CNI manager for ""
	I0816 22:37:46.870256   18929 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:37:46.870269   18929 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:37:46.870282   18929 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.129 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20210816223333-6986 NodeName:embed-certs-20210816223333-6986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.105.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:37:46.870401   18929 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20210816223333-6986"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 22:37:46.870482   18929 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20210816223333-6986 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.105.129 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816223333-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0816 22:37:46.870540   18929 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0816 22:37:46.878703   18929 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:37:46.878775   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:37:46.887763   18929 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (548 bytes)
	I0816 22:37:46.900548   18929 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 22:37:46.911899   18929 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2090 bytes)
	I0816 22:37:46.925412   18929 ssh_runner.go:149] Run: grep 192.168.105.129	control-plane.minikube.internal$ /etc/hosts
	I0816 22:37:46.929442   18929 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:37:46.939989   18929 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986 for IP: 192.168.105.129
	I0816 22:37:46.940054   18929 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:37:46.940073   18929 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:37:46.940143   18929 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/client.key
	I0816 22:37:46.940182   18929 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/apiserver.key.ff3abd74
	I0816 22:37:46.940203   18929 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/proxy-client.key
	I0816 22:37:46.940311   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem (1338 bytes)
	W0816 22:37:46.940364   18929 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
	I0816 22:37:46.940374   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 22:37:46.940398   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:37:46.940419   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:37:46.940453   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1675 bytes)
	I0816 22:37:46.940501   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:37:46.941607   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:37:46.959921   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 22:37:46.977073   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:37:46.995032   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 22:37:47.016388   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:37:47.036886   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 22:37:47.056736   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:37:47.076945   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 22:37:47.096512   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
	I0816 22:37:47.117888   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
	I0816 22:37:47.137952   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:37:47.159313   18929 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:37:47.173334   18929 ssh_runner.go:149] Run: openssl version
	I0816 22:37:47.179650   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:37:47.191486   18929 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:37:47.196524   18929 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:42 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:37:47.196589   18929 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:37:47.204162   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:37:47.214626   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
	I0816 22:37:47.226391   18929 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6986.pem
	I0816 22:37:47.234494   18929 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:49 /usr/share/ca-certificates/6986.pem
	I0816 22:37:47.234558   18929 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
	I0816 22:37:47.242705   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
	I0816 22:37:47.253305   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
	I0816 22:37:47.263502   18929 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/69862.pem
	I0816 22:37:47.268803   18929 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:49 /usr/share/ca-certificates/69862.pem
	I0816 22:37:47.268865   18929 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
	I0816 22:37:47.274964   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
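The test -L ... || ln -fs steps above give each CA a /etc/ssl/certs/<subject-hash>.0 symlink, the lookup scheme OpenSSL-style clients use to find trusted certificates. A Go sketch that derives the hash the same way the log does, by shelling out to openssl:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert computes the subject hash with `openssl x509 -hash -noout`
	// and installs the <hash>.0 symlink that the logged commands create.
	func linkCACert(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // replace any stale link
		return os.Symlink(pem, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}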
	I0816 22:37:47.283354   18929 kubeadm.go:390] StartCluster: {Name:embed-certs-20210816223333-6986 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816223333-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.129 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:37:47.283503   18929 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0816 22:37:47.283565   18929 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:37:47.325446   18929 cri.go:76] found id: ""
	I0816 22:37:47.325557   18929 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:37:47.335659   18929 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:37:47.335682   18929 kubeadm.go:600] restartCluster start
	I0816 22:37:47.335733   18929 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:37:47.346292   18929 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:47.347565   18929 kubeconfig.go:117] verify returned: extract IP: "embed-certs-20210816223333-6986" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:37:47.348014   18929 kubeconfig.go:128] "embed-certs-20210816223333-6986" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:37:47.348788   18929 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:37:47.351634   18929 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:37:47.361663   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:47.361718   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:47.374579   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:47.574973   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:47.575059   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:47.589172   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:47.775434   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:47.775507   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:47.788957   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:47.975270   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:47.975360   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:47.989460   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.175680   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.175758   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.191429   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.375697   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.375790   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.386436   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.574665   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.574762   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.589082   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.775443   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.775512   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.791358   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.975634   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.975720   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.988259   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.175437   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.175544   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.190342   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.375596   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.375683   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.389601   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.574808   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.574892   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.585369   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.775000   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.775066   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.787982   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.975134   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.975231   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.986392   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:50.175658   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:50.175750   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:50.188143   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:50.375418   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:50.375514   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:50.387182   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:50.387201   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:50.387249   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:50.397435   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:50.397461   18929 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
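The repeated "Checking apiserver status" probes above run pgrep at roughly 200ms intervals until a pid appears or the window closes, which is what produced the "timed out waiting for the condition" verdict. A Go sketch of that probe loop; the 3s demo timeout is an assumption:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerPID re-runs the pgrep probe from the log until it
	// reports a pid or the deadline passes.
	func waitForAPIServerPID(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				return string(out), nil
			}
			time.Sleep(200 * time.Millisecond) // cadence seen in the timestamps above
		}
		return "", fmt.Errorf("timed out waiting for the condition")
	}

	func main() {
		pid, err := waitForAPIServerPID(3 * time.Second)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Print(pid)
	}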
	I0816 22:37:50.397471   18929 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:37:50.397485   18929 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:37:50.397549   18929 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:37:50.439348   18929 cri.go:76] found id: ""
	I0816 22:37:50.439419   18929 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:37:50.459652   18929 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:37:50.469766   18929 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:37:50.469836   18929 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:37:50.479399   18929 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:37:50.479422   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:50.872420   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:50.387080   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:52.388399   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:53.358602   19204 containerd.go:546] Took 6.359210 seconds to copy over tarball
	I0816 22:37:53.358725   19204 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 22:37:51.735229   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:54.223000   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:52.412541   18929 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.540081052s)
	I0816 22:37:52.412575   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:52.718154   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:52.886875   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:53.025017   18929 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:37:53.025085   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:53.540988   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:54.040437   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:54.541392   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:55.040418   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:55.540381   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:54.887899   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:56.229434   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:58.302035   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:00.733041   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:56.040801   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:56.540669   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:57.040354   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:57.540386   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:58.040333   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:58.540400   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:59.040772   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:59.540444   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:00.041274   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:00.540645   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:00.741760   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:02.887487   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:03.393238   19204 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (10.034485098s)
	I0816 22:38:03.393270   19204 containerd.go:553] Took 10.034612 seconds to extract the tarball
	I0816 22:38:03.393282   19204 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0816 22:38:03.459021   19204 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:38:03.599477   19204 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:38:03.656046   19204 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0816 22:38:03.843112   19204 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0816 22:38:03.858574   19204 docker.go:153] disabling docker service ...
	I0816 22:38:03.858632   19204 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:38:03.872784   19204 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:38:03.886816   19204 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:38:04.029472   19204 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:38:04.164998   19204 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:38:04.176395   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:38:04.190579   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0816 22:38:04.204338   19204 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:38:04.211355   19204 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:38:04.211415   19204 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:38:04.229181   19204 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
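
[Editor's note] The status-255 sysctl failure above is the expected probe result when br_netfilter is not yet loaded: the runner logs it as "might be okay", loads the module, and enables IPv4 forwarding for bridge CNI. A stdlib-only sketch of that probe-then-fallback sequence (run is a hypothetical helper, not minikube's code):

    // netfilter_check.go: verify the bridge sysctl, load br_netfilter if
    // the key is missing, then enable IPv4 forwarding.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
        }
        return nil
    }

    func main() {
        // Probe: exits 255 when the br_netfilter module is not loaded.
        if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            fmt.Println("sysctl probe failed (might be okay):", err)
            // Fallback: load the module so the bridge sysctls appear.
            if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
                fmt.Println("modprobe br_netfilter:", err)
            }
        }
        // Required for pod traffic routed through the bridge.
        if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
            fmt.Println("enable ip_forward:", err)
        }
    }
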
	I0816 22:38:04.236487   19204 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:38:04.368079   19204 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:38:03.226580   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:05.846484   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:01.040586   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:01.541229   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:02.041014   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:02.540773   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:03.040804   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:03.540654   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:04.041158   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:04.540403   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:05.041212   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:05.540477   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:05.871071   19204 ssh_runner.go:189] Completed: sudo systemctl restart containerd: (1.502953255s)
	I0816 22:38:05.871107   19204 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0816 22:38:05.871162   19204 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:38:05.876672   19204 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
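
[Editor's note] After restarting containerd, start.go waits up to 60s for the socket to reappear, retrying stat with a backoff (the "will retry after 1.104660288s" line above). A simplified sketch of that wait loop, assuming a fixed 1s interval instead of minikube's randomized backoff:

    // wait_socket.go: wait for a Unix socket path to appear, as in
    // "Will wait 60s for socket path /run/containerd/containerd.sock".
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout, interval time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil // socket exists; containerd came back up
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
            }
            time.Sleep(interval)
        }
    }

    func main() {
        if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second, time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("containerd socket is up")
    }
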
	I0816 22:38:06.981936   19204 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:38:06.987477   19204 start.go:413] Will wait 60s for crictl version
	I0816 22:38:06.987542   19204 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:38:07.019404   19204 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0816 22:38:07.019460   19204 ssh_runner.go:149] Run: containerd --version
	I0816 22:38:07.056241   19204 ssh_runner.go:149] Run: containerd --version
	I0816 22:38:05.841456   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:07.888564   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:07.088137   19204 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0816 22:38:07.088183   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetIP
	I0816 22:38:07.093462   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:38:07.093796   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:38:07.093832   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:38:07.093973   19204 ssh_runner.go:149] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 22:38:07.098921   19204 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:38:07.109221   19204 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:38:07.109293   19204 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:38:07.143575   19204 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:38:07.143601   19204 containerd.go:517] Images already preloaded, skipping extraction
	I0816 22:38:07.143659   19204 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:38:07.174105   19204 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:38:07.174129   19204 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:38:07.174182   19204 ssh_runner.go:149] Run: sudo crictl info
	I0816 22:38:07.212980   19204 cni.go:93] Creating CNI manager for ""
	I0816 22:38:07.213012   19204 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:38:07.213028   19204 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:38:07.213043   19204 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.186 APIServerPort:8444 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20210816223418-6986 NodeName:default-k8s-different-port-20210816223418-6986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.50.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:38:07.213191   19204 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.186
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20210816223418-6986"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 22:38:07.213279   19204 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20210816223418-6986 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.186 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210816223418-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0816 22:38:07.213332   19204 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0816 22:38:07.222054   19204 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:38:07.222139   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:38:07.230063   19204 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (562 bytes)
	I0816 22:38:07.244461   19204 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 22:38:07.259892   19204 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0816 22:38:07.274883   19204 ssh_runner.go:149] Run: grep 192.168.50.186	control-plane.minikube.internal$ /etc/hosts
	I0816 22:38:07.280261   19204 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
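
[Editor's note] The one-liner above is an idempotent upsert on /etc/hosts: filter out any stale control-plane.minikube.internal mapping, append the fresh one, and copy the temp file back into place. The same pattern in Go, as a rough sketch (upsertHost is a hypothetical name; the real flow stages a temp file and sudo-cp's it, and this version needs root):

    // hosts_upsert.go: idempotent rewrite of an /etc/hosts entry.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func upsertHost(ip, host string) error {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any existing mapping for this hostname (the grep -v step).
            if strings.HasSuffix(line, "\t"+host) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := upsertHost("192.168.50.186", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
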
	I0816 22:38:07.293265   19204 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986 for IP: 192.168.50.186
	I0816 22:38:07.293314   19204 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:38:07.293333   19204 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:38:07.293384   19204 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/client.key
	I0816 22:38:07.293423   19204 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/apiserver.key.c5cc0a12
	I0816 22:38:07.293458   19204 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/proxy-client.key
	I0816 22:38:07.293569   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem (1338 bytes)
	W0816 22:38:07.293608   19204 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
	I0816 22:38:07.293618   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 22:38:07.293643   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:38:07.293668   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:38:07.293692   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1675 bytes)
	I0816 22:38:07.293738   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:38:07.294686   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:38:07.314730   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 22:38:07.332358   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:38:07.351920   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 22:38:07.369849   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:38:07.388099   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 22:38:07.406297   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:38:07.425998   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 22:38:07.443687   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:38:07.460832   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
	I0816 22:38:07.481210   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
	I0816 22:38:07.501717   19204 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:38:07.514903   19204 ssh_runner.go:149] Run: openssl version
	I0816 22:38:07.520949   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:38:07.531264   19204 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:38:07.536846   19204 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:42 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:38:07.536898   19204 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:38:07.543551   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:38:07.553322   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
	I0816 22:38:07.563414   19204 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6986.pem
	I0816 22:38:07.568579   19204 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:49 /usr/share/ca-certificates/6986.pem
	I0816 22:38:07.568631   19204 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
	I0816 22:38:07.574828   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
	I0816 22:38:07.582849   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
	I0816 22:38:07.591254   19204 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/69862.pem
	I0816 22:38:07.595981   19204 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:49 /usr/share/ca-certificates/69862.pem
	I0816 22:38:07.596044   19204 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
	I0816 22:38:07.602206   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:38:07.611191   19204 kubeadm.go:390] StartCluster: {Name:default-k8s-different-port-20210816223418-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210816223418-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:38:07.611272   19204 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0816 22:38:07.611319   19204 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:38:07.643146   19204 cri.go:76] found id: ""
	I0816 22:38:07.643226   19204 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:38:07.650886   19204 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:38:07.650919   19204 kubeadm.go:600] restartCluster start
	I0816 22:38:07.650971   19204 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:38:07.658653   19204 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:07.659605   19204 kubeconfig.go:117] verify returned: extract IP: "default-k8s-different-port-20210816223418-6986" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:38:07.660046   19204 kubeconfig.go:128] "default-k8s-different-port-20210816223418-6986" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:38:07.661820   19204 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:38:07.664797   19204 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:38:07.672378   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:07.672416   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:07.682197   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:07.882615   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:07.882689   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:07.893628   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.082995   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.083063   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.092764   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.283037   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.283112   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.293325   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.482586   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.482681   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.493502   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.682844   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.682915   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.693201   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.882416   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.882491   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.892118   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.082359   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.082457   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.092165   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.282385   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.282459   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.291528   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.482860   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.482930   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.493037   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.682335   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.682408   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.691945   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.883133   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.883193   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.892794   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.083140   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.083233   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.092308   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.223670   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:10.742112   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:06.041308   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:06.540690   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:07.041155   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:07.540839   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:08.040793   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:08.541292   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:09.041388   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:09.540943   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:10.041377   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:10.541237   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:10.386476   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:12.889815   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:10.282796   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.282889   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.292190   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.482261   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.482330   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.491729   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.683104   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.683186   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.693060   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.693079   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.693121   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.701893   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.701916   19204 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0816 22:38:10.701925   19204 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:38:10.701938   19204 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:38:10.701989   19204 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:38:10.740433   19204 cri.go:76] found id: ""
	I0816 22:38:10.740501   19204 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:38:10.756485   19204 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:38:10.765450   19204 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:38:10.765507   19204 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:38:10.772477   19204 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:38:10.772499   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:11.017384   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:12.671111   19204 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.653686174s)
	I0816 22:38:12.671155   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:12.947393   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:13.086256   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
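
[Editor's note] Because the stale-config check failed, the control plane is rebuilt piecewise with individual "kubeadm init phase" invocations rather than a full init. A sketch of driving those five phases in order (hypothetical wrapper; the config path and PATH prefix match the log):

    // kubeadm_phases.go: run the phased restart seen above, each phase
    // against the regenerated kubeadm.yaml.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const cfg = "/var/tmp/minikube/kubeadm.yaml"
        // PATH is prefixed so the cached kubeadm binary wins, as in the log.
        env := "PATH=/var/lib/minikube/binaries/v1.21.3:" + os.Getenv("PATH")
        phases := [][]string{
            {"certs", "all"},         // regenerate serving and client certificates
            {"kubeconfig", "all"},    // admin.conf, kubelet.conf, etc.
            {"kubelet-start"},        // write kubelet config and (re)start it
            {"control-plane", "all"}, // static pod manifests for the control plane
            {"etcd", "local"},        // static pod manifest for local etcd
        }
        for _, p := range phases {
            args := append(append([]string{"env", env, "kubeadm", "init", "phase"}, p...), "--config", cfg)
            cmd := exec.Command("sudo", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
                os.Exit(1)
            }
        }
    }
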
	I0816 22:38:13.215447   19204 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:38:13.215508   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:13.731105   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.231119   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.731093   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:15.231319   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:13.224797   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:15.723341   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:11.040800   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:11.540697   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:12.040673   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:12.541181   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:13.041152   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:13.541025   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.041183   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.541230   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.551768   18929 api_server.go:70] duration metric: took 21.526753133s to wait for apiserver process to appear ...
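
[Editor's note] The repeated pgrep lines are a roughly 500ms poll for the kube-apiserver process; api_server.go reports the total as a duration metric once the pid appears. A stdlib sketch of the same loop (waitForAPIServerProcess is a hypothetical name, not minikube's exact code):

    // wait_apiserver.go: poll pgrep until the kube-apiserver process appears.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    func waitForAPIServerProcess(timeout time.Duration) (time.Duration, error) {
        start := time.Now()
        for time.Since(start) < timeout {
            // -x: exact match, -n: newest, -f: match the full command line.
            out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                fmt.Printf("apiserver pid: %s", out)
                return time.Since(start), nil
            }
            time.Sleep(500 * time.Millisecond) // matches the cadence in the log
        }
        return time.Since(start), fmt.Errorf("apiserver process did not appear within %s", timeout)
    }

    func main() {
        d, err := waitForAPIServerProcess(60 * time.Second)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Printf("duration metric: took %s to wait for apiserver process\n", d)
    }
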
	I0816 22:38:14.551790   18929 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:38:14.551800   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:38:15.386344   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:16.395588   18635 pod_ready.go:92] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.395621   18635 pod_ready.go:81] duration metric: took 51.044447203s waiting for pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.395634   18635 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.408068   18635 pod_ready.go:92] pod "etcd-old-k8s-version-20210816223154-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.408086   18635 pod_ready.go:81] duration metric: took 12.443476ms waiting for pod "etcd-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.408096   18635 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.414488   18635 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20210816223154-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.414507   18635 pod_ready.go:81] duration metric: took 6.402316ms waiting for pod "kube-apiserver-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.414521   18635 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.420281   18635 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20210816223154-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.420300   18635 pod_ready.go:81] duration metric: took 5.769412ms waiting for pod "kube-controller-manager-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.420313   18635 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nvb2s" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.425411   18635 pod_ready.go:92] pod "kube-proxy-nvb2s" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.425430   18635 pod_ready.go:81] duration metric: took 5.109715ms waiting for pod "kube-proxy-nvb2s" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.425440   18635 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.784339   18635 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20210816223154-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.784360   18635 pod_ready.go:81] duration metric: took 358.911908ms waiting for pod "kube-scheduler-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.784371   18635 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:18.553150   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:38:18.553194   18929 api_server.go:101] status: https://192.168.105.129:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:38:19.053887   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:38:19.071151   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:38:19.071179   18929 api_server.go:101] status: https://192.168.105.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:38:19.553619   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:38:19.561382   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:38:19.561406   18929 api_server.go:101] status: https://192.168.105.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:38:20.053341   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:38:20.061527   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 200:
	ok
	I0816 22:38:20.069537   18929 api_server.go:139] control plane version: v1.21.3
	I0816 22:38:20.069560   18929 api_server.go:129] duration metric: took 5.517764917s to wait for apiserver health ...
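
[Editor's note] The healthz responses above trace the recovery: 403 for system:anonymous while RBAC is still bootstrapping, then 500 while poststarthooks such as rbac/bootstrap-roles are pending, and finally 200 "ok". A sketch of a poller that treats anything but 200 as "not ready yet" (TLS verification is skipped here as a simplification; the real check trusts the cluster CA):

    // healthz_poll.go: poll /healthz until the apiserver reports 200 OK.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    func main() {
        const url = "https://192.168.105.129:8443/healthz" // endpoint from the log
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Simplification: skip verification; minikube's check uses the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body) // "ok"
                    return
                }
                // 403 (anonymous) and 500 (pending poststarthooks) are transient.
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Fprintln(os.Stderr, "apiserver never became healthy")
        os.Exit(1)
    }
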
	I0816 22:38:20.069572   18929 cni.go:93] Creating CNI manager for ""
	I0816 22:38:20.069581   18929 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:38:15.731207   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:16.231247   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:16.731268   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:17.230730   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:17.730956   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:18.231458   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:18.730950   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:19.230879   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:19.730819   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:20.230563   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:20.243853   19204 api_server.go:70] duration metric: took 7.028407985s to wait for apiserver process to appear ...
	I0816 22:38:20.243876   19204 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:38:20.243887   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:18.225200   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:20.243220   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:20.071659   18929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:38:20.071738   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:38:20.084719   18929 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:38:20.113939   18929 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:38:20.132494   18929 system_pods.go:59] 8 kube-system pods found
	I0816 22:38:20.132598   18929 system_pods.go:61] "coredns-558bd4d5db-jq6bb" [c088e8ae-638c-449f-b206-10b016f707f4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:38:20.132622   18929 system_pods.go:61] "etcd-embed-certs-20210816223333-6986" [350ff095-f45d-4c87-a10a-cbb9a0cc4358] Running
	I0816 22:38:20.132654   18929 system_pods.go:61] "kube-apiserver-embed-certs-20210816223333-6986" [7ee444e9-f198-4d9b-985e-b190a2e5e369] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 22:38:20.132667   18929 system_pods.go:61] "kube-controller-manager-embed-certs-20210816223333-6986" [c71ecc69-d617-48d3-a162-46d27aedd0a9] Running
	I0816 22:38:20.132676   18929 system_pods.go:61] "kube-proxy-8h6xz" [7cbdd516-13c5-469b-8e60-7dc0babb699a] Running
	I0816 22:38:20.132688   18929 system_pods.go:61] "kube-scheduler-embed-certs-20210816223333-6986" [4ebf165e-13c3-4f42-a75f-4301ea2f6c78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:38:20.132698   18929 system_pods.go:61] "metrics-server-7c784ccb57-9xpsr" [6b6283cf-0668-48a4-9f21-61cb5723f0b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:38:20.132704   18929 system_pods.go:61] "storage-provisioner" [7893460e-43c2-4606-8b56-c2ed9ac764bd] Running
	I0816 22:38:20.132712   18929 system_pods.go:74] duration metric: took 18.749758ms to wait for pod list to return data ...
	I0816 22:38:20.132721   18929 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:38:20.138564   18929 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:38:20.138614   18929 node_conditions.go:123] node cpu capacity is 2
	I0816 22:38:20.138632   18929 node_conditions.go:105] duration metric: took 5.904026ms to run NodePressure ...
	I0816 22:38:20.138651   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:20.830223   18929 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:38:20.835364   18929 kubeadm.go:746] kubelet initialised
	I0816 22:38:20.835384   18929 kubeadm.go:747] duration metric: took 5.139864ms waiting for restarted kubelet to initialise ...
	I0816 22:38:20.835392   18929 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:38:20.841354   18929 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace to be "Ready" ...
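
[Editor's note] Each pod_ready wait above polls the pod's Ready condition until it flips to "True" or the 4m0s budget runs out. Outside the test harness the same check can be scripted against kubectl's JSONPath output; a hedged sketch (pod name taken from the log, polling interval an assumption):

    // pod_ready.go: wait for a pod's Ready condition via kubectl.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
        "time"
    )

    func podReady(ns, name string) (bool, error) {
        out, err := exec.Command("kubectl", "get", "pod", name, "-n", ns,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        deadline := time.Now().Add(4 * time.Minute) // matches "waiting up to 4m0s"
        for time.Now().Before(deadline) {
            if ready, err := podReady("kube-system", "coredns-558bd4d5db-jq6bb"); err == nil && ready {
                fmt.Println(`pod has status "Ready":"True"`)
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Fprintln(os.Stderr, "timed out waiting for pod to be Ready")
        os.Exit(1)
    }
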
	I0816 22:38:19.191797   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:21.192936   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:25.244953   19204 api_server.go:255] stopped: https://192.168.50.186:8444/healthz: Get "https://192.168.50.186:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 22:38:22.723414   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:25.223163   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:22.860677   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:24.863916   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:23.690499   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:25.690995   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:27.691820   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:25.746028   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:27.721976   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:29.722107   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:27.361030   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:29.361218   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:30.190894   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:32.192100   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:30.746969   19204 api_server.go:255] stopped: https://192.168.50.186:8444/healthz: Get "https://192.168.50.186:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 22:38:31.245148   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:32.224115   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:34.723153   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:31.859919   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:33.863770   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:34.691552   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:36.693980   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:36.246218   19204 api_server.go:255] stopped: https://192.168.50.186:8444/healthz: Get "https://192.168.50.186:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 22:38:36.745853   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:37.223369   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:39.239225   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:36.360668   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:38.361218   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:40.871372   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:41.344967   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:38:41.344991   19204 api_server.go:101] status: https://192.168.50.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:38:41.745061   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:41.754168   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:38:41.754195   19204 api_server.go:101] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:38:42.245898   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:42.258458   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:38:42.258509   19204 api_server.go:101] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:38:42.745610   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:42.756658   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 200:
	ok
	I0816 22:38:42.770293   19204 api_server.go:139] control plane version: v1.21.3
	I0816 22:38:42.770321   19204 api_server.go:129] duration metric: took 22.526438535s to wait for apiserver health ...
	I0816 22:38:42.770332   19204 cni.go:93] Creating CNI manager for ""
	I0816 22:38:42.770339   19204 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
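[Editor's note on the exchange above: pid 19204's api_server.go loop polls the apiserver's /healthz endpoint until it answers 200. The 403 is the anonymous probe being rejected before the rbac/bootstrap-roles poststarthook finishes; the 500 responses enumerate the poststarthooks still pending; the final 200 ("ok") at 22:38:42 unblocks the start sequence after 22.5s. Below is a minimal, self-contained sketch of such a poll loop; the function name, timings, and TLS handling are illustrative assumptions, not minikube's actual implementation.]

-- sketch (Go, illustrative only) --
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200, logging interim
// 403/500 responses the way the api_server.go lines above do.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The test VM's apiserver serves a self-signed cert; verification
		// is skipped here purely for illustration.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %v\n", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			// 403 (anonymous user, RBAC not bootstrapped yet) and 500
			// (poststarthooks still failing) both mean "keep waiting".
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.186:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
-- /sketch --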
	I0816 22:38:39.192176   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:41.198006   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:42.772377   19204 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:38:42.772434   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:38:42.788298   19204 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:38:42.809709   19204 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:38:42.824805   19204 system_pods.go:59] 8 kube-system pods found
	I0816 22:38:42.824843   19204 system_pods.go:61] "coredns-558bd4d5db-ssfkf" [eb30728b-0eae-41d8-90bc-d8de8c6b4caa] Running
	I0816 22:38:42.824857   19204 system_pods.go:61] "etcd-default-k8s-different-port-20210816223418-6986" [825a27d4-d8dc-4dbe-a724-ac2e59508c5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 22:38:42.824865   19204 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210816223418-6986" [a3383733-5a20-4b5a-aeab-df3e61e37d94] Running
	I0816 22:38:42.824882   19204 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210816223418-6986" [42f433b1-271b-41a6-96a0-ab85fe6ba28e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 22:38:42.824896   19204 system_pods.go:61] "kube-proxy-psg4t" [98ca6629-d521-445d-99c2-b7e7ddf3b973] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:38:42.824905   19204 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210816223418-6986" [bef50322-5dc7-4680-b867-e17eb23298a8] Running
	I0816 22:38:42.824919   19204 system_pods.go:61] "metrics-server-7c784ccb57-rmrr6" [325f4892-3ae2-4a08-bc13-22c74c15c362] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:38:42.824929   19204 system_pods.go:61] "storage-provisioner" [89aadc6c-b5b0-47eb-b6e0-0f5fb78b1689] Running
	I0816 22:38:42.824936   19204 system_pods.go:74] duration metric: took 15.209253ms to wait for pod list to return data ...
	I0816 22:38:42.824947   19204 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:38:42.835095   19204 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:38:42.835144   19204 node_conditions.go:123] node cpu capacity is 2
	I0816 22:38:42.835160   19204 node_conditions.go:105] duration metric: took 10.206913ms to run NodePressure ...
	I0816 22:38:42.835178   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:43.431532   19204 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:38:43.443469   19204 kubeadm.go:746] kubelet initialised
	I0816 22:38:43.443543   19204 kubeadm.go:747] duration metric: took 11.973692ms waiting for restarted kubelet to initialise ...
	I0816 22:38:43.443567   19204 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:38:43.467119   19204 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-ssfkf" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:43.487197   19204 pod_ready.go:92] pod "coredns-558bd4d5db-ssfkf" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:43.487224   19204 pod_ready.go:81] duration metric: took 20.062907ms waiting for pod "coredns-558bd4d5db-ssfkf" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:43.487236   19204 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
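[Editor's note: the pod_ready.go lines that dominate this section ("waiting up to 4m0s for pod ... to be \"Ready\"", then the periodic has-status checks) amount to polling each pod's Ready condition until it turns True or the deadline passes. A hedged client-go sketch of that check follows; the kubeconfig path, poll interval, and hard-coded pod name (taken from the log above) are assumptions for illustration, not minikube's pod_ready.go.]

-- sketch (Go, illustrative only) --
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True,
// mirroring the "Ready":"True"/"False" statuses logged above.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig location is an assumption for illustration.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Poll one kube-system pod until Ready or timeout, roughly what
	// pod_ready.go does in turn for each system-critical pod.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-558bd4d5db-ssfkf", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
-- /sketch --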
	I0816 22:38:41.723036   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:43.727234   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:42.883394   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:45.360217   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:43.692394   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:46.195001   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:45.513670   19204 pod_ready.go:102] pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:47.520170   19204 pod_ready.go:102] pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:50.012608   19204 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:50.012643   19204 pod_ready.go:81] duration metric: took 6.525398312s waiting for pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:50.012653   19204 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:50.018616   19204 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:50.018632   19204 pod_ready.go:81] duration metric: took 5.971078ms waiting for pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:50.018641   19204 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:46.223793   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:48.231527   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:50.721902   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:47.864929   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:50.359955   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:48.690708   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:50.691511   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:53.191133   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:52.030327   19204 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:54.530276   19204 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:52.723113   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:54.730785   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:52.865142   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:55.362902   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:55.692797   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:58.193231   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:56.537583   19204 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:59.032998   19204 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:59.531144   19204 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:59.531179   19204 pod_ready.go:81] duration metric: took 9.512530001s waiting for pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:59.531194   19204 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-psg4t" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:57.227423   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:59.722421   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:57.860847   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:00.383065   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:00.194401   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:02.693032   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:01.045104   19204 pod_ready.go:92] pod "kube-proxy-psg4t" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.045136   19204 pod_ready.go:81] duration metric: took 1.513934389s waiting for pod "kube-proxy-psg4t" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.045162   19204 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:03.065559   19204 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:02.225371   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:04.231432   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:01.360648   18929 pod_ready.go:92] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.360679   18929 pod_ready.go:81] duration metric: took 40.519291305s waiting for pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.360692   18929 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.377816   18929 pod_ready.go:92] pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.377835   18929 pod_ready.go:81] duration metric: took 17.135128ms waiting for pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.377844   18929 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.384900   18929 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.384919   18929 pod_ready.go:81] duration metric: took 7.067915ms waiting for pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.384928   18929 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.391593   18929 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.391615   18929 pod_ready.go:81] duration metric: took 6.679953ms waiting for pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.391628   18929 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8h6xz" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.397839   18929 pod_ready.go:92] pod "kube-proxy-8h6xz" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.397859   18929 pod_ready.go:81] duration metric: took 6.224125ms waiting for pod "kube-proxy-8h6xz" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.397870   18929 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.757203   18929 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.757231   18929 pod_ready.go:81] duration metric: took 359.352415ms waiting for pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.757245   18929 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:04.166965   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:05.190883   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:07.691413   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:05.560049   19204 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:07.563106   19204 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:10.058732   19204 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:06.241105   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:08.721067   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:10.729982   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:06.173818   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:08.671197   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:10.190249   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:12.190937   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:11.058551   19204 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:11.058589   19204 pod_ready.go:81] duration metric: took 10.013415785s waiting for pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:11.058602   19204 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:13.079741   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:13.222923   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:15.223480   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:11.169568   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:13.668888   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:15.675907   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:14.691328   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:17.193097   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:15.574185   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:18.080714   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:17.721688   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:20.223136   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:18.166872   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:20.167888   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:19.690743   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:21.695097   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:20.573176   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:22.575373   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:25.080599   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:22.721982   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:24.723334   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:22.674385   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:25.168465   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:24.191127   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:26.692188   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:27.576508   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:30.077538   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:26.725975   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:29.222550   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:27.667108   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:29.672819   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:29.190076   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:31.191096   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:32.573255   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:34.574846   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:31.222778   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:33.721695   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:35.722989   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:32.167222   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:34.168925   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:33.691602   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:35.693194   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:38.192247   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:36.575818   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:39.074280   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:37.724177   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:40.222061   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:36.667227   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:38.667709   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:40.193105   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:42.691214   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:41.577819   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:44.074371   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:42.222318   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:44.223676   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:41.169382   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:43.169678   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:45.172140   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:44.692521   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:46.693152   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:46.080520   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:48.574175   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:46.226822   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:48.723407   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:50.723464   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:47.669324   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:50.168305   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:49.191566   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:51.192223   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:50.574493   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:53.072736   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:55.075288   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:53.226025   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:55.722244   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:52.667088   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:54.668826   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:53.690899   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:55.692317   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:58.190689   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:57.076942   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:59.573822   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:58.225641   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:00.721925   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:57.165321   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:59.171812   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:00.194014   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:02.691574   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:01.573901   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:04.073928   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:02.724585   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:04.724644   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:01.175154   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:03.669857   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:05.191832   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:07.693327   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:06.576903   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:09.078443   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:07.222275   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:09.224637   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:06.167190   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:08.168551   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:10.668660   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:10.191769   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:12.693193   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:11.574665   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:13.576508   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:11.224838   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:13.721159   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:15.727256   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:12.670244   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:15.167885   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:15.194325   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:17.692108   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:16.072818   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:18.078890   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:17.729812   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:20.226491   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:17.177047   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:19.217251   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:20.192280   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:22.693518   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:20.574552   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:22.574777   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:24.577476   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:22.727579   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:24.728352   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:21.668537   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:24.167106   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:25.191135   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:27.191723   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:27.075236   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:29.574554   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:27.223601   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:29.225348   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:26.172206   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:28.666902   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:30.667512   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:29.693817   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:32.192170   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:32.073947   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:34.076857   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:31.806875   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:34.222064   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:32.670097   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:35.167425   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:34.193574   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:36.692421   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:36.575233   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:39.074418   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:36.223456   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:38.224575   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:40.721673   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:37.168398   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:39.172793   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:39.196016   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:41.690324   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:41.075116   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:43.576123   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:42.724088   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:44.724675   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:41.674073   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:44.170704   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:43.693077   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:45.693362   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:48.190525   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:45.576264   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:48.077395   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:46.729980   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:49.221967   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:46.171454   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:48.665714   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:50.668334   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:50.193564   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:52.691234   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:50.572686   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:52.574382   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:55.074999   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:51.222668   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:53.226343   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:55.725259   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:52.673171   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:55.168585   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:54.692513   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:57.191126   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:57.079875   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:59.573017   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:58.221527   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:00.227502   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:57.671255   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:00.168665   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:59.691534   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:01.693478   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:01.582883   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:04.072426   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:02.722966   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:04.727296   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:02.173240   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:04.665480   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:04.191798   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:06.691447   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:06.073825   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:08.074664   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:10.075325   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:07.223517   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:09.721892   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:06.667330   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:08.671220   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:09.191192   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:11.691389   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:12.076107   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:14.575585   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:11.725914   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:13.730699   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:11.169385   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:13.673312   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:14.191060   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:16.192184   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:16.576492   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:19.076650   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:16.225569   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:18.724188   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:20.724698   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:16.165664   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:18.166105   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:20.166339   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:18.691871   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:20.691922   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:23.191074   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:21.574173   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:24.075930   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:23.223119   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:25.223978   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:22.173729   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:24.666435   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:25.692064   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:27.693165   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:26.574028   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:28.577627   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:27.723162   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:30.225428   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:26.666698   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:28.667290   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:30.669320   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:30.191236   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:32.194129   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:31.078550   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:33.574708   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:32.272795   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:34.721477   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:32.670349   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:35.166861   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:34.691270   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:36.693071   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:36.073462   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:38.075367   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:36.731674   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:39.226976   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:37.170645   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:39.724821   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:39.190190   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:41.192605   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:43.194313   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:40.572815   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:43.074323   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:41.728026   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:44.222098   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:45.713684   18923 pod_ready.go:81] duration metric: took 4m0.016600156s waiting for pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace to be "Ready" ...
	E0816 22:41:45.713707   18923 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:41:45.713739   18923 pod_ready.go:38] duration metric: took 4m11.701504099s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:41:45.713769   18923 kubeadm.go:604] restartCluster took 4m33.579475629s
	W0816 22:41:45.713944   18923 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:41:45.714027   18923 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
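
The interleaved `pod_ready.go:102` lines above come from four test profiles (PIDs 18635, 18923, 18929, 19204) polling their metrics-server pods in parallel: each poll re-reads the pod and logs its Ready condition until the 4m0s budget expires, as it just did for "metrics-server-7c784ccb57-44llk". A minimal client-go sketch of that kind of wait loop (illustrative only, not minikube's actual pod_ready implementation; the kubeconfig path, namespace, and pod name are taken from the log):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod's Ready condition until it is True or the
// timeout expires, mirroring the repeated `has status "Ready":"False"`
// lines in the log above.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient API errors: keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", name, ns, c.Status)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// 4m0s matches the WaitExtra budget that just timed out above.
	if err := waitPodReady(cs, "kube-system", "metrics-server-7c784ccb57-44llk", 4*time.Minute); err != nil {
		fmt.Println("timed out waiting for pod to be Ready:", err)
	}
}
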
	I0816 22:41:42.167746   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:44.671010   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:45.690207   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:47.696181   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:45.573577   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:47.577169   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:50.074120   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:49.532312   18923 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.817885262s)
	I0816 22:41:49.532396   18923 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:41:49.547377   18923 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:41:49.547460   18923 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:41:49.586205   18923 cri.go:76] found id: "c265ff52803b42d9b0ff6036d0ca9fe4b655c9875474baa9a0a3f7255d230e17"
	I0816 22:41:49.586231   18923 cri.go:76] found id: ""
	W0816 22:41:49.586237   18923 kubeadm.go:840] found 1 kube-system containers to stop
	I0816 22:41:49.586243   18923 cri.go:221] Stopping containers: [c265ff52803b42d9b0ff6036d0ca9fe4b655c9875474baa9a0a3f7255d230e17]
	I0816 22:41:49.586286   18923 ssh_runner.go:149] Run: which crictl
	I0816 22:41:49.590992   18923 ssh_runner.go:149] Run: sudo /bin/crictl stop c265ff52803b42d9b0ff6036d0ca9fe4b655c9875474baa9a0a3f7255d230e17
	I0816 22:41:49.626874   18923 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:41:49.635033   18923 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:41:49.643072   18923 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:41:49.643114   18923 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
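
Having given up on restartCluster after 4m33s, process 18923 falls back to a full reset-and-reinit: `kubeadm reset`, stop the kubelet and leftover kube-system containers, promote kubeadm.yaml.new, then `kubeadm init` again. A hedged sketch of that fallback sequence using os/exec, with the commands lifted from the log lines above (the ignore-preflight list is abridged; in reality these run over SSH inside the VM, not locally):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell command the way ssh_runner does over SSH in the log;
// for this sketch it simply runs locally.
func run(cmd string) error {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("$ %s\n%s", cmd, out)
	return err
}

func main() {
	kubeadm := "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm"
	// 1. Tear down the control plane that never became healthy.
	_ = run(kubeadm + " reset --cri-socket /run/containerd/containerd.sock --force")
	// 2. Stop the kubelet (crictl then stops remaining kube-system containers).
	_ = run("sudo systemctl stop -f kubelet")
	// 3. Promote the freshly rendered config and bootstrap again.
	_ = run("sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml")
	_ = run(kubeadm + " init --config /var/tmp/minikube/kubeadm.yaml" +
		" --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,Mem") // abridged; see the log line for the full set
}

Note the "config check failed, skipping stale config cleanup" above is expected here: `kubeadm reset` already removed /etc/kubernetes/*.conf, so the `ls` probe exits with status 2 and cleanup is skipped.
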
	I0816 22:41:46.671498   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:49.167852   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:50.191302   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:52.194912   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:52.573508   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:54.574289   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:51.170118   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:53.672114   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:54.691353   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:56.691660   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:57.075408   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:59.575201   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:56.166934   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:58.175241   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:00.668070   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:58.692572   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:00.693110   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:02.693563   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:02.073370   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:04.074072   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:03.171450   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:05.675018   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:05.192214   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:07.692700   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:08.829041   18923 out.go:204]   - Generating certificates and keys ...
	I0816 22:42:08.831708   18923 out.go:204]   - Booting up control plane ...
	I0816 22:42:08.834200   18923 out.go:204]   - Configuring RBAC rules ...
	I0816 22:42:08.836416   18923 cni.go:93] Creating CNI manager for ""
	I0816 22:42:08.836433   18923 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:42:06.578343   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:08.578554   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:08.838017   18923 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:42:08.838073   18923 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:42:08.846501   18923 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
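
The 457-byte /etc/cni/net.d/1-k8s.conflist pushed above configures the bridge CNI plugin that cni.go recommends for the "kvm2" driver + containerd combination. A sketch of writing such a file; the JSON shown is a generic bridge/host-local example for illustration, not the exact bytes minikube ships:

package main

import "os"

// An illustrative bridge + host-local IPAM conflist; minikube's real
// 1-k8s.conflist differs in its exact fields and subnet.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    }
  ]
}
`

func main() {
	// Mirrors the `sudo mkdir -p /etc/cni/net.d` and scp steps above.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
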
	I0816 22:42:08.869457   18923 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:42:08.869501   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:08.869527   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=no-preload-20210816223156-6986 minikube.k8s.io/updated_at=2021_08_16T22_42_08_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:09.240543   18923 ops.go:34] apiserver oom_adj: -16
	I0816 22:42:09.240662   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:09.839173   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:10.338906   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:10.839126   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:08.175656   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:10.670201   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:09.693093   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:12.193949   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:11.076847   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:13.572667   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:11.339623   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:11.839145   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:12.339335   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:12.839352   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:13.339016   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:13.838633   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:14.339209   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:14.839574   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:15.338605   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:15.838986   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:13.166828   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:15.170558   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:14.195434   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:16.691097   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:17.183312   18635 pod_ready.go:81] duration metric: took 4m0.398928004s waiting for pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace to be "Ready" ...
	E0816 22:42:17.183337   18635 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:42:17.183357   18635 pod_ready.go:38] duration metric: took 4m51.857756569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:42:17.183387   18635 kubeadm.go:604] restartCluster took 5m19.62322748s
	W0816 22:42:17.183554   18635 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:42:17.183589   18635 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0816 22:42:15.573445   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:17.576213   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:19.578780   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:16.339618   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:16.839112   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:17.338889   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:17.838606   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:18.339509   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:18.839537   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:19.338632   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:19.839240   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:20.339527   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:20.838664   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
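
The repeated `kubectl get sa default` runs above (roughly every 500ms, per the timestamps) are a retry loop: kubeadm does not create the "default" ServiceAccount instantly after init, so minikube polls for it before elevating kube-system privileges with the minikube-rbac ClusterRoleBinding created at 22:42:08. A hedged client-go equivalent of both steps (a sketch, not minikube's code):

package main

import (
	"context"
	"time"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until the "default" ServiceAccount exists, like the repeated
	// `kubectl get sa default` runs above (here every 500ms for up to 1m).
	err = wait.PollImmediate(500*time.Millisecond, time.Minute, func() (bool, error) {
		_, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		return err == nil, nil
	})
	if err != nil {
		panic(err)
	}
	// Equivalent of `kubectl create clusterrolebinding minikube-rbac
	// --clusterrole=cluster-admin --serviceaccount=kube-system:default`.
	_, err = cs.RbacV1().ClusterRoleBindings().Create(context.TODO(), &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		Subjects: []rbacv1.Subject{{
			Kind: "ServiceAccount", Name: "default", Namespace: "kube-system",
		}},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io", Kind: "ClusterRole", Name: "cluster-admin",
		},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
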
	I0816 22:42:17.671899   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:19.672963   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:20.586991   18635 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.403367986s)
	I0816 22:42:20.587083   18635 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:42:20.603414   18635 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:42:20.603499   18635 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:42:20.644469   18635 cri.go:76] found id: ""
	I0816 22:42:20.644547   18635 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:42:20.654179   18635 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:42:20.664747   18635 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:42:20.664790   18635 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
	I0816 22:42:21.326940   18635 out.go:204]   - Generating certificates and keys ...
	I0816 22:42:21.189008   18923 kubeadm.go:985] duration metric: took 12.319564991s to wait for elevateKubeSystemPrivileges.
	I0816 22:42:21.189042   18923 kubeadm.go:392] StartCluster complete in 5m9.132482632s
	I0816 22:42:21.189068   18923 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:42:21.189186   18923 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:42:21.191084   18923 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0816 22:42:21.253468   18923 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0816 22:42:22.263255   18923 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20210816223156-6986" rescaled to 1
	I0816 22:42:22.263323   18923 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.116.66 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0816 22:42:22.265111   18923 out.go:177] * Verifying Kubernetes components...
	I0816 22:42:22.265169   18923 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:42:22.263389   18923 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:42:22.263413   18923 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:42:22.265318   18923 addons.go:59] Setting storage-provisioner=true in profile "no-preload-20210816223156-6986"
	I0816 22:42:22.265343   18923 addons.go:135] Setting addon storage-provisioner=true in "no-preload-20210816223156-6986"
	I0816 22:42:22.265343   18923 addons.go:59] Setting dashboard=true in profile "no-preload-20210816223156-6986"
	W0816 22:42:22.265352   18923 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:42:22.265365   18923 addons.go:135] Setting addon dashboard=true in "no-preload-20210816223156-6986"
	W0816 22:42:22.265384   18923 addons.go:147] addon dashboard should already be in state true
	I0816 22:42:22.263563   18923 config.go:177] Loaded profile config "no-preload-20210816223156-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0816 22:42:22.265401   18923 addons.go:59] Setting metrics-server=true in profile "no-preload-20210816223156-6986"
	I0816 22:42:22.265412   18923 host.go:66] Checking if "no-preload-20210816223156-6986" exists ...
	I0816 22:42:22.265427   18923 addons.go:135] Setting addon metrics-server=true in "no-preload-20210816223156-6986"
	W0816 22:42:22.265437   18923 addons.go:147] addon metrics-server should already be in state true
	I0816 22:42:22.265384   18923 addons.go:59] Setting default-storageclass=true in profile "no-preload-20210816223156-6986"
	I0816 22:42:22.265462   18923 host.go:66] Checking if "no-preload-20210816223156-6986" exists ...
	I0816 22:42:22.265390   18923 host.go:66] Checking if "no-preload-20210816223156-6986" exists ...
	I0816 22:42:22.265461   18923 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20210816223156-6986"
	I0816 22:42:22.265940   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.265944   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.265957   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.265942   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.265975   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.265986   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.266089   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.266123   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.281969   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45777
	I0816 22:42:22.282708   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.282877   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40837
	I0816 22:42:22.283046   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42349
	I0816 22:42:22.283302   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.283322   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.283427   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.283650   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.283893   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.284078   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.284092   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.284330   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.284347   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.284461   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.284627   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.284665   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.284970   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.285003   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.285116   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.285285   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.293128   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38523
	I0816 22:42:22.293558   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.294059   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.294082   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.294429   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.294987   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.295053   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.298092   18923 addons.go:135] Setting addon default-storageclass=true in "no-preload-20210816223156-6986"
	W0816 22:42:22.298118   18923 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:42:22.298147   18923 host.go:66] Checking if "no-preload-20210816223156-6986" exists ...
	I0816 22:42:22.298560   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.298601   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.302416   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44833
	I0816 22:42:22.302994   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.303562   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.303593   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.304002   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.304209   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.305854   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34121
	I0816 22:42:22.306273   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.307236   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.307263   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.307631   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.307783   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.308340   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .DriverName
	I0816 22:42:22.310958   18923 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:42:22.311023   18923 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:42:22.311044   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:42:22.311064   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHHostname
	I0816 22:42:22.311377   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .DriverName
	I0816 22:42:22.313216   18923 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:42:22.311947   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45779
	I0816 22:42:22.313321   18923 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:42:22.313337   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:42:22.312981   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38669
	I0816 22:42:22.313354   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHHostname
	I0816 22:42:22.313674   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.313848   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.314124   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.314144   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.314391   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.314413   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.314493   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.314698   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.314875   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.315544   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.315591   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.319514   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .DriverName
	I0816 22:42:22.319736   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.321507   18923 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:42:22.320102   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:66:08", ip: ""} in network mk-no-preload-20210816223156-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:37:07 +0000 UTC Type:0 Mac:52:54:00:1e:66:08 Iaid: IPaddr:192.168.116.66 Prefix:24 Hostname:no-preload-20210816223156-6986 Clientid:01:52:54:00:1e:66:08}
	I0816 22:42:22.320309   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHPort
	I0816 22:42:22.320694   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.321331   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHPort
	I0816 22:42:22.321669   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:66:08", ip: ""} in network mk-no-preload-20210816223156-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:37:07 +0000 UTC Type:0 Mac:52:54:00:1e:66:08 Iaid: IPaddr:192.168.116.66 Prefix:24 Hostname:no-preload-20210816223156-6986 Clientid:01:52:54:00:1e:66:08}
	I0816 22:42:22.321594   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined IP address 192.168.116.66 and MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.180281   18635 out.go:204]   - Booting up control plane ...
	I0816 22:42:22.073806   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:24.079495   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:22.323189   18923 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:42:22.321708   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined IP address 192.168.116.66 and MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.321766   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHKeyPath
	I0816 22:42:22.321808   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHKeyPath
	I0816 22:42:22.323243   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:42:22.323341   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:42:22.323363   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHHostname
	I0816 22:42:22.323468   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHUsername
	I0816 22:42:22.323473   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHUsername
	I0816 22:42:22.323663   18923 sshutil.go:53] new ssh client: &{IP:192.168.116.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa Username:docker}
	I0816 22:42:22.323678   18923 sshutil.go:53] new ssh client: &{IP:192.168.116.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa Username:docker}
	I0816 22:42:22.328724   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45831
	I0816 22:42:22.329130   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.329535   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.329554   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.329851   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.329938   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.330124   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.330329   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:66:08", ip: ""} in network mk-no-preload-20210816223156-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:37:07 +0000 UTC Type:0 Mac:52:54:00:1e:66:08 Iaid: IPaddr:192.168.116.66 Prefix:24 Hostname:no-preload-20210816223156-6986 Clientid:01:52:54:00:1e:66:08}
	I0816 22:42:22.330363   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined IP address 192.168.116.66 and MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.330478   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHPort
	I0816 22:42:22.330620   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHKeyPath
	I0816 22:42:22.330750   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHUsername
	I0816 22:42:22.330873   18923 sshutil.go:53] new ssh client: &{IP:192.168.116.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa Username:docker}
	I0816 22:42:22.333001   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .DriverName
	I0816 22:42:22.333246   18923 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:42:22.333262   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:42:22.333279   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHHostname
	I0816 22:42:22.338603   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.339024   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:66:08", ip: ""} in network mk-no-preload-20210816223156-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:37:07 +0000 UTC Type:0 Mac:52:54:00:1e:66:08 Iaid: IPaddr:192.168.116.66 Prefix:24 Hostname:no-preload-20210816223156-6986 Clientid:01:52:54:00:1e:66:08}
	I0816 22:42:22.339055   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined IP address 192.168.116.66 and MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.339242   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHPort
	I0816 22:42:22.339393   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHKeyPath
	I0816 22:42:22.339570   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHUsername
	I0816 22:42:22.339731   18923 sshutil.go:53] new ssh client: &{IP:192.168.116.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa Username:docker}
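
Each `Launching plugin server for driver kvm2` / `Plugin server listening at address 127.0.0.1:<port>` pair above is libmachine starting docker-machine-driver-kvm2 as a separate process and talking to it over a loopback RPC connection; the `() Calling .GetVersion` and `.GetSSHHostname` lines are those RPC round-trips. A toy net/rpc sketch of that shape (the Driver type and NoArgs argument are hypothetical stand-ins, not the real libmachine plugin protocol):

package main

import (
	"fmt"
	"net"
	"net/rpc"
)

// Driver stands in for the plugin's RPC surface; the real plugin exposes
// calls like GetVersion, GetMachineName, GetState, GetSSHHostname.
type Driver struct{}

type NoArgs struct{}

func (d *Driver) GetVersion(_ NoArgs, reply *int) error {
	*reply = 1 // cf. "Using API Version  1" in the log
	return nil
}

func main() {
	// Server side: in minikube this is a separate driver process; here it
	// is a goroutine listening on an ephemeral loopback port.
	srv := rpc.NewServer()
	if err := srv.Register(&Driver{}); err != nil {
		panic(err)
	}
	ln, err := net.Listen("tcp", "127.0.0.1:0") // "Plugin server listening at address ..."
	if err != nil {
		panic(err)
	}
	go srv.Accept(ln)

	// Client side: "() Calling .GetVersion" is a round-trip like this one.
	client, err := rpc.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	var version int
	if err := client.Call("Driver.GetVersion", NoArgs{}, &version); err != nil {
		panic(err)
	}
	fmt.Println("driver API version:", version)
}
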
	I0816 22:42:22.671302   18923 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:42:22.671331   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:42:22.674471   18923 node_ready.go:35] waiting up to 6m0s for node "no-preload-20210816223156-6986" to be "Ready" ...
	I0816 22:42:22.674764   18923 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.116.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:42:22.680985   18923 node_ready.go:49] node "no-preload-20210816223156-6986" has status "Ready":"True"
	I0816 22:42:22.681006   18923 node_ready.go:38] duration metric: took 6.219914ms waiting for node "no-preload-20210816223156-6986" to be "Ready" ...
	I0816 22:42:22.681017   18923 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:42:22.690584   18923 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:22.758871   18923 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:42:22.908102   18923 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:42:22.908132   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:42:23.011738   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:42:23.011768   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:42:23.048103   18923 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:42:23.113442   18923 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:42:23.113472   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:42:23.311431   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:42:23.311461   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:42:23.413450   18923 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:42:23.601523   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:42:23.601554   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:42:23.797882   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:42:23.797908   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:42:23.957080   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:42:23.957109   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:42:24.496102   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:42:24.496134   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:42:24.715720   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:42:24.715807   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:42:24.725833   18923 pod_ready.go:102] pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:24.991135   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:42:24.991165   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:42:25.061259   18923 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.116.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.386242884s)
	I0816 22:42:25.061297   18923 start.go:728] {"host.minikube.internal": 192.168.116.1} host record injected into CoreDNS
	I0816 22:42:25.085411   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:42:25.085463   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:42:25.132722   18923 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:42:25.402705   18923 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.64379015s)
	I0816 22:42:25.402772   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.402790   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.403123   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.403222   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:25.403245   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.403270   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.403197   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:25.403597   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:25.404574   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.404594   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:25.404607   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.404616   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.404837   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.404878   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:25.431424   18923 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.383276848s)
	I0816 22:42:25.431470   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.431484   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.431767   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.431781   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:25.431788   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:25.431799   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.431810   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.432092   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.432111   18923 main.go:130] libmachine: Making call to close connection to plugin binary
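
Every addon above follows the same two-step pattern: each manifest is streamed into the VM over SSH ("scp memory --> /etc/kubernetes/addons/...yaml"), then all of an addon's manifests are applied in a single kubectl invocation against the guest kubeconfig. A sketch of the apply step with os/exec (paths from the log; in reality this runs inside the VM via ssh_runner rather than locally):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.22.0-rc.0/kubectl"
	manifests := []string{ // streamed to the VM beforehand via "scp memory -->"
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	}
	// One apply for the whole addon: kubectl apply -f a.yaml -f b.yaml ...
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ %s %s\n%s", kubectl, strings.Join(args, " "), out)
	if err != nil {
		panic(err)
	}
}

The paired "Making call to close driver server" / "Closing plugin on server side" lines that follow each completed apply are the libmachine RPC clients from above shutting down their per-driver plugin connections.
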
	I0816 22:42:22.168138   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:24.174050   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:26.094382   18923 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.680878058s)
	I0816 22:42:26.094446   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:26.094474   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:26.094773   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:26.094830   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:26.094859   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:26.094885   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:26.094774   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:26.095167   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:26.095182   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:26.095193   18923 addons.go:313] Verifying addon metrics-server=true in "no-preload-20210816223156-6986"
	I0816 22:42:26.855647   18923 pod_ready.go:102] pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:27.149522   18923 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.016735128s)
	I0816 22:42:27.149590   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:27.149605   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:27.149955   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:27.150053   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:27.150073   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:27.150083   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:27.150094   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:27.150330   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:27.150347   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:26.575022   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:28.575534   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:27.153345   18923 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0816 22:42:27.153375   18923 addons.go:344] enableAddons completed in 4.88997344s
	I0816 22:42:28.729990   18923 pod_ready.go:92] pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:28.730033   18923 pod_ready.go:81] duration metric: took 6.039413295s waiting for pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:28.730047   18923 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-vf8l9" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.743600   18923 pod_ready.go:97] error getting pod "coredns-78fcd69978-vf8l9" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-vf8l9" not found
	I0816 22:42:30.743642   18923 pod_ready.go:81] duration metric: took 2.013586217s waiting for pod "coredns-78fcd69978-vf8l9" in "kube-system" namespace to be "Ready" ...
	E0816 22:42:30.743656   18923 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-78fcd69978-vf8l9" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-vf8l9" not found
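The waiter above logs the missing coredns-78fcd69978-vf8l9 pod as "skipping!" rather than failing: the coredns deployment gets rescaled to one replica, so a NotFound from the API means the pod is gone for good and there is nothing left to wait on. A rough client-go sketch of that NotFound-versus-error distinction (the kubeconfig path and pod name are taken from the log; everything else is illustrative):

    package main

    import (
        "context"
        "fmt"

        "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Path as logged on the node; adjust for a local run.
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        _, err = clientset.CoreV1().Pods("kube-system").Get(
            context.TODO(), "coredns-78fcd69978-vf8l9", metav1.GetOptions{})
        switch {
        case errors.IsNotFound(err):
            // The pod was deleted (deployment rescaled); skip, don't fail.
            fmt.Println("pod gone, skipping readiness wait")
        case err != nil:
            fmt.Println("transient API error:", err)
        default:
            fmt.Println("pod still exists; keep polling its Ready condition")
        }
    }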
	I0816 22:42:30.743666   18923 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.757721   18923 pod_ready.go:92] pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.757745   18923 pod_ready.go:81] duration metric: took 14.064042ms waiting for pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.757758   18923 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.767053   18923 pod_ready.go:92] pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.767087   18923 pod_ready.go:81] duration metric: took 9.317684ms waiting for pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.767102   18923 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.777595   18923 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.777619   18923 pod_ready.go:81] duration metric: took 10.507966ms waiting for pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.777632   18923 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jhqbx" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.790967   18923 pod_ready.go:92] pod "kube-proxy-jhqbx" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.790991   18923 pod_ready.go:81] duration metric: took 13.350231ms waiting for pod "kube-proxy-jhqbx" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.791003   18923 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:26.174733   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:28.675892   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:30.951607   18923 pod_ready.go:92] pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.951630   18923 pod_ready.go:81] duration metric: took 160.617881ms waiting for pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.951642   18923 pod_ready.go:38] duration metric: took 8.270610362s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:42:30.951663   18923 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:42:30.951723   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:42:30.970609   18923 api_server.go:70] duration metric: took 8.707242252s to wait for apiserver process to appear ...
	I0816 22:42:30.970637   18923 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:42:30.970650   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:42:30.979459   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 200:
	ok
	I0816 22:42:30.980742   18923 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:42:30.980766   18923 api_server.go:129] duration metric: took 10.122149ms to wait for apiserver health ...
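The healthz gate that just passed is a plain HTTPS GET against the apiserver, treated as healthy once it returns 200 with body "ok". A minimal probe in that spirit (endpoint taken from the log; the InsecureSkipVerify shortcut is an assumption for this sketch, since the real client trusts the cluster CA from the kubeconfig):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Skip CA verification only for this sketch.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.116.66:8443/healthz")
        if err != nil {
            fmt.Println("healthz not reachable yet:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // The log treats HTTP 200 with body "ok" as healthy.
        fmt.Printf("%d: %s\n", resp.StatusCode, body)
    }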
	I0816 22:42:30.980777   18923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:42:31.156911   18923 system_pods.go:59] 8 kube-system pods found
	I0816 22:42:31.156942   18923 system_pods.go:61] "coredns-78fcd69978-9rlk6" [83d3b042-5692-4c44-b6f2-65020120666e] Running
	I0816 22:42:31.156949   18923 system_pods.go:61] "etcd-no-preload-20210816223156-6986" [4ea832d0-9ae2-4d9e-9aed-aa581e1ef4cf] Running
	I0816 22:42:31.156956   18923 system_pods.go:61] "kube-apiserver-no-preload-20210816223156-6986" [048331fe-fed3-4cca-a2c6-3b377e26647b] Running
	I0816 22:42:31.156965   18923 system_pods.go:61] "kube-controller-manager-no-preload-20210816223156-6986" [704424de-e6c8-4c70-bd80-c9b68c4c4b57] Running
	I0816 22:42:31.156971   18923 system_pods.go:61] "kube-proxy-jhqbx" [edacd358-46da-4db4-a8db-098f6edefb76] Running
	I0816 22:42:31.156977   18923 system_pods.go:61] "kube-scheduler-no-preload-20210816223156-6986" [7674933f-6de0-4090-867c-c082293d963c] Running
	I0816 22:42:31.156988   18923 system_pods.go:61] "metrics-server-7c784ccb57-dfjww" [7a744b20-6d7f-4001-a322-7e5615cbf15f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:42:31.156998   18923 system_pods.go:61] "storage-provisioner" [c8a5ee3a-f5df-4eeb-92b1-443ecef04ea2] Running
	I0816 22:42:31.157005   18923 system_pods.go:74] duration metric: took 176.222595ms to wait for pod list to return data ...
	I0816 22:42:31.157016   18923 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:42:31.345286   18923 default_sa.go:45] found service account: "default"
	I0816 22:42:31.345311   18923 default_sa.go:55] duration metric: took 188.289571ms for default service account to be created ...
	I0816 22:42:31.345319   18923 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:42:31.555450   18923 system_pods.go:86] 8 kube-system pods found
	I0816 22:42:31.555481   18923 system_pods.go:89] "coredns-78fcd69978-9rlk6" [83d3b042-5692-4c44-b6f2-65020120666e] Running
	I0816 22:42:31.555490   18923 system_pods.go:89] "etcd-no-preload-20210816223156-6986" [4ea832d0-9ae2-4d9e-9aed-aa581e1ef4cf] Running
	I0816 22:42:31.555497   18923 system_pods.go:89] "kube-apiserver-no-preload-20210816223156-6986" [048331fe-fed3-4cca-a2c6-3b377e26647b] Running
	I0816 22:42:31.555503   18923 system_pods.go:89] "kube-controller-manager-no-preload-20210816223156-6986" [704424de-e6c8-4c70-bd80-c9b68c4c4b57] Running
	I0816 22:42:31.555509   18923 system_pods.go:89] "kube-proxy-jhqbx" [edacd358-46da-4db4-a8db-098f6edefb76] Running
	I0816 22:42:31.555515   18923 system_pods.go:89] "kube-scheduler-no-preload-20210816223156-6986" [7674933f-6de0-4090-867c-c082293d963c] Running
	I0816 22:42:31.555529   18923 system_pods.go:89] "metrics-server-7c784ccb57-dfjww" [7a744b20-6d7f-4001-a322-7e5615cbf15f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:42:31.555541   18923 system_pods.go:89] "storage-provisioner" [c8a5ee3a-f5df-4eeb-92b1-443ecef04ea2] Running
	I0816 22:42:31.555553   18923 system_pods.go:126] duration metric: took 210.228822ms to wait for k8s-apps to be running ...
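The k8s-apps check above lists the kube-system pods once and prints each one's phase, which is why the still-Pending metrics-server shows up without blocking the wait. A rough client-go equivalent of that listing (standard k8s.io/client-go packages; the kubeconfig path is a placeholder):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder path; the tests use the kubeconfig minikube writes.
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pods, err := clientset.CoreV1().Pods("kube-system").List(
            context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            // Phase is what the summary lines print; the Ready condition
            // is what the pod_ready waiters poll for separately.
            fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
        }
    }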
	I0816 22:42:31.555566   18923 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:42:31.555615   18923 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:42:31.581892   18923 system_svc.go:56] duration metric: took 26.318542ms WaitForService to wait for kubelet.
	I0816 22:42:31.581920   18923 kubeadm.go:547] duration metric: took 9.318562144s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:42:31.581949   18923 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:42:31.744656   18923 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:42:31.744683   18923 node_conditions.go:123] node cpu capacity is 2
	I0816 22:42:31.744699   18923 node_conditions.go:105] duration metric: took 162.745304ms to run NodePressure ...
	I0816 22:42:31.744708   18923 start.go:231] waiting for startup goroutines ...
	I0816 22:42:31.799332   18923 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0816 22:42:31.801873   18923 out.go:177] 
	W0816 22:42:31.802045   18923 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0816 22:42:31.803807   18923 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0816 22:42:31.805603   18923 out.go:177] * Done! kubectl is now configured to use "no-preload-20210816223156-6986" cluster and "default" namespace by default
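The skew warning a few lines up compares only minor versions: kubectl 1.20.5 against cluster 1.22.0-rc.0 gives a minor skew of 2, outside kubectl's supported one-minor window. A toy version of that comparison (hand-rolled parsing for the sketch; a real implementation would use a semver library):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minor extracts the minor component from a version like "1.22.0-rc.0".
    func minor(v string) int {
        parts := strings.Split(v, ".")
        n, _ := strconv.Atoi(parts[1])
        return n
    }

    func main() {
        client, cluster := "1.20.5", "1.22.0-rc.0"
        skew := minor(cluster) - minor(client)
        if skew < 0 {
            skew = -skew
        }
        // A skew above 1 triggers the "may have incompatibilities" warning.
        fmt.Printf("minor skew: %d\n", skew)
    }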
	I0816 22:42:34.356504   18635 out.go:204]   - Configuring RBAC rules ...
	I0816 22:42:34.810198   18635 cni.go:93] Creating CNI manager for ""
	I0816 22:42:34.810227   18635 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:42:30.576523   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:33.074048   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:35.075110   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:31.178766   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:33.673945   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:35.674516   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:34.812149   18635 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:42:34.812218   18635 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:42:34.823097   18635 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:42:34.840052   18635 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:42:34.840175   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=old-k8s-version-20210816223154-6986 minikube.k8s.io/updated_at=2021_08_16T22_42_34_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:34.840179   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:35.279911   18635 ops.go:34] apiserver oom_adj: 16
	I0816 22:42:35.279930   18635 ops.go:39] adjusting apiserver oom_adj to -10
	I0816 22:42:35.279944   18635 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
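The ops.go lines above read the apiserver's current oom_adj (16) and lower it to -10 so the kernel's OOM killer prefers other processes over the apiserver. A local approximation of the same adjustment (the test drives these commands over SSH; writing the value requires root, hence the sudo tee in the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Same lookup the log's $(pgrep kube-apiserver) performs.
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            fmt.Println("no kube-apiserver process:", err)
            return
        }
        pid := strings.Fields(string(out))[0]
        path := "/proc/" + pid + "/oom_adj"
        cur, _ := os.ReadFile(path)
        fmt.Printf("apiserver oom_adj: %s", cur)
        // A lower score makes the OOM killer pick other victims first.
        if err := os.WriteFile(path, []byte("-10\n"), 0644); err != nil {
            fmt.Println("write failed (need root?):", err)
        }
    }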
	I0816 22:42:35.279997   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:35.887807   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:36.388228   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:36.888072   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:37.388131   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:37.888197   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:37.075407   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:39.574205   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:38.169080   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:40.669388   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:38.388192   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:38.887529   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:39.387314   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:39.887397   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:40.388222   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:40.887817   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:41.388165   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:41.887336   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:42.387710   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:42.887452   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:41.575892   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:44.074399   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:43.168677   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:45.674667   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:43.388233   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:43.888191   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:44.388190   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:44.888073   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:45.387300   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:45.887633   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:46.388266   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:46.887918   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:47.387283   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:47.887770   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:46.074552   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:48.573015   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:48.387776   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:48.888189   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:49.388262   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:49.887594   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:50.137803   18635 kubeadm.go:985] duration metric: took 15.297678668s to wait for elevateKubeSystemPrivileges.
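The long run of `kubectl get sa default` calls that ends here is a fixed-interval poll: elevateKubeSystemPrivileges retries roughly every half second until the default service account exists, since the cluster-admin binding created at 22:42:34 is useless before the controller manager has minted that account. A generic version of the poll (interval and timeout are illustrative; the log runs the versioned kubectl binary on the node via sudo):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
        for time.Now().Before(deadline) {
            // Succeeds only once kube-controller-manager has created the
            // default service account in the default namespace.
            err := exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
            if err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the ~0.5s spacing in the log
        }
        fmt.Println("timed out waiting for default service account")
    }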
	I0816 22:42:50.137838   18635 kubeadm.go:392] StartCluster complete in 5m52.622280434s
	I0816 22:42:50.137865   18635 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:42:50.137996   18635 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:42:50.140032   18635 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:42:50.769953   18635 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20210816223154-6986" rescaled to 1
	I0816 22:42:50.770028   18635 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.94.246 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0816 22:42:50.771768   18635 out.go:177] * Verifying Kubernetes components...
	I0816 22:42:50.771833   18635 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:42:50.770075   18635 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:42:50.770097   18635 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:42:50.770295   18635 config.go:177] Loaded profile config "old-k8s-version-20210816223154-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0816 22:42:50.771981   18635 addons.go:59] Setting default-storageclass=true in profile "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.771981   18635 addons.go:59] Setting storage-provisioner=true in profile "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.771999   18635 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.772004   18635 addons.go:135] Setting addon storage-provisioner=true in "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.771995   18635 addons.go:59] Setting dashboard=true in profile "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.772027   18635 addons.go:135] Setting addon dashboard=true in "old-k8s-version-20210816223154-6986"
	W0816 22:42:50.772039   18635 addons.go:147] addon dashboard should already be in state true
	I0816 22:42:50.772074   18635 host.go:66] Checking if "old-k8s-version-20210816223154-6986" exists ...
	I0816 22:42:50.771981   18635 addons.go:59] Setting metrics-server=true in profile "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.772106   18635 addons.go:135] Setting addon metrics-server=true in "old-k8s-version-20210816223154-6986"
	W0816 22:42:50.772118   18635 addons.go:147] addon metrics-server should already be in state true
	I0816 22:42:50.772143   18635 host.go:66] Checking if "old-k8s-version-20210816223154-6986" exists ...
	W0816 22:42:50.772012   18635 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:42:50.772202   18635 host.go:66] Checking if "old-k8s-version-20210816223154-6986" exists ...
	I0816 22:42:50.772450   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.772491   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.772514   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.772550   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.772562   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.772590   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.772850   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.772907   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.786384   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43919
	I0816 22:42:50.786896   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.787436   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.787463   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.787854   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.788085   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.788330   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36229
	I0816 22:42:50.788749   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.789268   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.789290   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.789622   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.790176   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.790222   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.795830   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42399
	I0816 22:42:50.795865   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0816 22:42:50.796347   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.796355   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.796868   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.796888   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.796872   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.796936   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.797257   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.797329   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.797807   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.797848   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.797871   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.797906   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.799195   18635 addons.go:135] Setting addon default-storageclass=true in "old-k8s-version-20210816223154-6986"
	W0816 22:42:50.799218   18635 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:42:50.799243   18635 host.go:66] Checking if "old-k8s-version-20210816223154-6986" exists ...
	I0816 22:42:50.799640   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.799681   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.810531   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40999
	I0816 22:42:50.811204   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.811785   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.811802   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.812347   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.812540   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.815618   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44099
	I0816 22:42:50.815827   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34305
	I0816 22:42:50.816141   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.816227   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.816697   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.816714   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.816835   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.816854   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.817100   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.817172   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.817189   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .DriverName
	I0816 22:42:50.817352   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.819885   18635 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:42:50.817704   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.820954   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .DriverName
	I0816 22:42:50.821662   18635 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:42:50.821713   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.821719   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:42:50.821731   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:42:50.821750   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHHostname
	I0816 22:42:50.823437   18635 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:42:50.822272   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33579
	I0816 22:42:50.823493   18635 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:42:50.823505   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:42:50.823522   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHHostname
	I0816 22:42:50.823823   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.824293   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.824311   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.824702   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.824895   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.828911   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.828954   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .DriverName
	I0816 22:42:47.677798   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:50.171236   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:50.830871   18635 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:42:50.830990   18635 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:42:50.831003   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:42:50.831019   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHHostname
	I0816 22:42:50.829748   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:23:71", ip: ""} in network mk-old-k8s-version-20210816223154-6986: {Iface:virbr6 ExpiryTime:2021-08-16 23:36:29 +0000 UTC Type:0 Mac:52:54:00:bf:23:71 Iaid: IPaddr:192.168.94.246 Prefix:24 Hostname:old-k8s-version-20210816223154-6986 Clientid:01:52:54:00:bf:23:71}
	I0816 22:42:50.831084   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined IP address 192.168.94.246 and MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.829926   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHPort
	I0816 22:42:50.830586   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.831142   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:23:71", ip: ""} in network mk-old-k8s-version-20210816223154-6986: {Iface:virbr6 ExpiryTime:2021-08-16 23:36:29 +0000 UTC Type:0 Mac:52:54:00:bf:23:71 Iaid: IPaddr:192.168.94.246 Prefix:24 Hostname:old-k8s-version-20210816223154-6986 Clientid:01:52:54:00:bf:23:71}
	I0816 22:42:50.831171   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined IP address 192.168.94.246 and MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.831303   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHKeyPath
	I0816 22:42:50.831452   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHUsername
	I0816 22:42:50.831626   18635 sshutil.go:53] new ssh client: &{IP:192.168.94.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa Username:docker}
	I0816 22:42:50.831935   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHPort
	I0816 22:42:50.832101   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHKeyPath
	I0816 22:42:50.832284   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHUsername
	I0816 22:42:50.832496   18635 sshutil.go:53] new ssh client: &{IP:192.168.94.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa Username:docker}
	I0816 22:42:50.835565   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34581
	I0816 22:42:50.836045   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.836624   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.836646   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.836952   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.837022   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.837210   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.837385   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:23:71", ip: ""} in network mk-old-k8s-version-20210816223154-6986: {Iface:virbr6 ExpiryTime:2021-08-16 23:36:29 +0000 UTC Type:0 Mac:52:54:00:bf:23:71 Iaid: IPaddr:192.168.94.246 Prefix:24 Hostname:old-k8s-version-20210816223154-6986 Clientid:01:52:54:00:bf:23:71}
	I0816 22:42:50.837420   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined IP address 192.168.94.246 and MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.837596   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHPort
	I0816 22:42:50.837797   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHKeyPath
	I0816 22:42:50.837973   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHUsername
	I0816 22:42:50.838150   18635 sshutil.go:53] new ssh client: &{IP:192.168.94.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa Username:docker}
	I0816 22:42:50.839968   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .DriverName
	I0816 22:42:50.840224   18635 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:42:50.840241   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:42:50.840256   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHHostname
	I0816 22:42:50.846248   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.846622   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:23:71", ip: ""} in network mk-old-k8s-version-20210816223154-6986: {Iface:virbr6 ExpiryTime:2021-08-16 23:36:29 +0000 UTC Type:0 Mac:52:54:00:bf:23:71 Iaid: IPaddr:192.168.94.246 Prefix:24 Hostname:old-k8s-version-20210816223154-6986 Clientid:01:52:54:00:bf:23:71}
	I0816 22:42:50.846648   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined IP address 192.168.94.246 and MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.846901   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHPort
	I0816 22:42:50.847072   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHKeyPath
	I0816 22:42:50.847256   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHUsername
	I0816 22:42:50.847384   18635 sshutil.go:53] new ssh client: &{IP:192.168.94.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa Username:docker}
	I0816 22:42:51.069324   18635 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20210816223154-6986" to be "Ready" ...
	I0816 22:42:51.069363   18635 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:42:51.074198   18635 node_ready.go:49] node "old-k8s-version-20210816223154-6986" has status "Ready":"True"
	I0816 22:42:51.074219   18635 node_ready.go:38] duration metric: took 4.853226ms waiting for node "old-k8s-version-20210816223154-6986" to be "Ready" ...
	I0816 22:42:51.074228   18635 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:42:51.079427   18635 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:51.095977   18635 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:42:51.095994   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:42:51.114667   18635 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:42:51.127402   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:42:51.127423   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:42:51.139080   18635 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:42:51.142203   18635 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:42:51.142227   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:42:51.184024   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:42:51.184049   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:42:51.229690   18635 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:42:51.229719   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:42:51.258163   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:42:51.258186   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:42:51.292848   18635 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:42:51.348950   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:42:51.348979   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:42:51.432982   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:42:51.433017   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:42:51.500730   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:42:51.500762   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:42:51.566104   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:42:51.566132   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:42:51.669547   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:42:51.669569   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:42:51.755011   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:42:51.755042   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:42:51.807684   18635 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
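Each "scp memory --> ..." line above streams an embedded addon manifest straight from memory to a path on the node, after which a single kubectl apply consumes the whole batch. A sketch of that push over SSH (golang.org/x/crypto/ssh; the password auth and host-key bypass are placeholders, as the real harness uses the per-machine id_rsa key shown in the log):

    package main

    import (
        "bytes"
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    // pushFile copies in-memory bytes to a remote path, which is what the
    // "scp memory --> ..." log lines describe (signature is illustrative).
    func pushFile(client *ssh.Client, path string, data []byte) error {
        session, err := client.NewSession()
        if err != nil {
            return err
        }
        defer session.Close()
        session.Stdin = bytes.NewReader(data)
        // tee writes stdin to the destination; sudo because /etc/kubernetes
        // is root-owned on the node.
        return session.Run(fmt.Sprintf("sudo tee %s >/dev/null", path))
    }

    func main() {
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.Password("placeholder")}, // sketch only
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),                   // sketch only
        }
        client, err := ssh.Dial("tcp", "192.168.94.246:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: kubernetes-dashboard\n")
        if err := pushFile(client, "/etc/kubernetes/addons/dashboard-ns.yaml", manifest); err != nil {
            panic(err)
        }
    }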
	I0816 22:42:52.571594   18635 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.502197835s)
	I0816 22:42:52.571636   18635 start.go:728] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
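The pipeline that just completed pulls the coredns ConfigMap, splices a hosts block in front of the `forward . /etc/resolv.conf` directive with sed, and pushes the result back with kubectl replace. Reconstructed from the sed expression in the log, the injected Corefile fragment ends up as:

    hosts {
       192.168.94.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf

The fallthrough keeps every other name flowing on to the forward plugin, so only host.minikube.internal is answered from the static entry.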
	I0816 22:42:52.759651   18635 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.644944376s)
	I0816 22:42:52.759687   18635 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.620572399s)
	I0816 22:42:52.759727   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.759743   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.759751   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.759765   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.760012   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.760058   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:52.760071   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.760080   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.760115   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.760131   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:52.760156   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.760170   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.761684   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.761690   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | Closing plugin on server side
	I0816 22:42:52.761704   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:52.761719   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | Closing plugin on server side
	I0816 22:42:52.761689   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.761794   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:52.761806   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.761817   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.762085   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.762108   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:53.390381   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:53.699731   18635 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.406829667s)
	I0816 22:42:53.699820   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:53.699836   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:53.700202   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:53.700222   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:53.700238   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:53.700249   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:53.700503   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:53.700523   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:53.700538   18635 addons.go:313] Verifying addon metrics-server=true in "old-k8s-version-20210816223154-6986"
	I0816 22:42:54.131359   18635 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.323617191s)
	I0816 22:42:54.131419   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:54.131434   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:54.131720   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | Closing plugin on server side
	I0816 22:42:54.131759   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:54.131767   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:54.131782   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:54.131793   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:54.132029   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:54.132048   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:50.574063   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:53.075372   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:52.670047   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:54.673975   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:54.134079   18635 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0816 22:42:54.134104   18635 addons.go:344] enableAddons completed in 3.364015112s
	I0816 22:42:55.589126   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:57.594328   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:55.581048   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:58.075675   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:57.167077   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:59.670483   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:59.594568   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:02.093248   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:00.574293   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:02.574884   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:05.075277   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:02.159000   18929 pod_ready.go:81] duration metric: took 4m0.401738783s waiting for pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace to be "Ready" ...
	E0816 22:43:02.159021   18929 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:43:02.159049   18929 pod_ready.go:38] duration metric: took 4m41.323642164s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:02.159079   18929 kubeadm.go:604] restartCluster took 5m14.823391905s
	W0816 22:43:02.159203   18929 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:43:02.159238   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0816 22:43:05.238090   18929 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.078818721s)
	I0816 22:43:05.238168   18929 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:43:05.256580   18929 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:43:05.256649   18929 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:43:05.300644   18929 cri.go:76] found id: ""
	I0816 22:43:05.300755   18929 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:43:05.308191   18929 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:43:05.315888   18929 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:43:05.315936   18929 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0816 22:43:05.885054   18929 out.go:204]   - Generating certificates and keys ...
	I0816 22:43:04.591211   18635 pod_ready.go:92] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:04.591250   18635 pod_ready.go:81] duration metric: took 13.511789308s waiting for pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:04.591266   18635 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jmg6d" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:04.598816   18635 pod_ready.go:92] pod "kube-proxy-jmg6d" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:04.598833   18635 pod_ready.go:81] duration metric: took 7.559474ms waiting for pod "kube-proxy-jmg6d" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:04.598842   18635 pod_ready.go:38] duration metric: took 13.524600915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:04.598861   18635 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:43:04.598908   18635 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:43:04.615708   18635 api_server.go:70] duration metric: took 13.845635855s to wait for apiserver process to appear ...
	I0816 22:43:04.615739   18635 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:43:04.615748   18635 api_server.go:239] Checking apiserver healthz at https://192.168.94.246:8443/healthz ...
	I0816 22:43:04.624860   18635 api_server.go:265] https://192.168.94.246:8443/healthz returned 200:
	ok
	I0816 22:43:04.626456   18635 api_server.go:139] control plane version: v1.14.0
	I0816 22:43:04.626478   18635 api_server.go:129] duration metric: took 10.733471ms to wait for apiserver health ...
	I0816 22:43:04.626487   18635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:43:04.631832   18635 system_pods.go:59] 4 kube-system pods found
	I0816 22:43:04.631861   18635 system_pods.go:61] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.631867   18635 system_pods.go:61] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.631877   18635 system_pods.go:61] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:04.631883   18635 system_pods.go:61] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.631892   18635 system_pods.go:74] duration metric: took 5.399191ms to wait for pod list to return data ...
	I0816 22:43:04.631901   18635 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:43:04.635992   18635 default_sa.go:45] found service account: "default"
	I0816 22:43:04.636015   18635 default_sa.go:55] duration metric: took 4.107562ms for default service account to be created ...
	I0816 22:43:04.636025   18635 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:43:04.640667   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:04.640691   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.640697   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.640704   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:04.640709   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.640726   18635 retry.go:31] will retry after 305.063636ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:04.951327   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:04.951357   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.951365   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.951377   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:04.951384   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.951402   18635 retry.go:31] will retry after 338.212508ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:05.295109   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:05.295143   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.295154   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.295165   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:05.295174   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.295193   18635 retry.go:31] will retry after 378.459802ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:05.683391   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:05.683423   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.683431   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.683442   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:05.683452   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.683472   18635 retry.go:31] will retry after 469.882201ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:06.158721   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:06.158752   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.158757   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.158765   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:06.158770   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.158786   18635 retry.go:31] will retry after 667.365439ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:06.831740   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:06.831771   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.831781   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.831790   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:06.831799   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.831818   18635 retry.go:31] will retry after 597.243124ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:07.434457   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:07.434482   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:07.434487   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:07.434494   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:07.434499   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:07.434513   18635 retry.go:31] will retry after 789.889932ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:07.075753   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:09.575726   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:06.996973   18929 out.go:204]   - Booting up control plane ...
	I0816 22:43:08.229786   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:08.229819   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:08.229827   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:08.229840   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:08.229845   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:08.229863   18635 retry.go:31] will retry after 951.868007ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:09.187817   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:09.187852   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:09.187862   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:09.187873   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:09.187878   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:09.187895   18635 retry.go:31] will retry after 1.341783893s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:10.534567   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:10.534608   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:10.534615   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:10.534627   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:10.534634   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:10.534652   18635 retry.go:31] will retry after 1.876813009s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:12.418546   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:12.418572   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:12.418579   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:12.418590   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:12.418596   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:12.418612   18635 retry.go:31] will retry after 2.6934314s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:11.066632   19204 pod_ready.go:81] duration metric: took 4m0.008014176s waiting for pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace to be "Ready" ...
	E0816 22:43:11.066660   19204 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:43:11.066679   19204 pod_ready.go:38] duration metric: took 4m27.623084704s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:11.066704   19204 kubeadm.go:604] restartCluster took 5m3.415779611s
	W0816 22:43:11.066819   19204 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:43:11.066856   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0816 22:43:14.269873   19204 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.202987817s)
	I0816 22:43:14.269950   19204 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:43:14.288386   19204 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:43:14.288469   19204 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:43:14.333856   19204 cri.go:76] found id: ""
	I0816 22:43:14.333935   19204 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:43:14.343737   19204 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:43:14.352599   19204 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:43:14.352646   19204 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0816 22:43:14.930093   19204 out.go:204]   - Generating certificates and keys ...
	I0816 22:43:15.118830   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:15.118862   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:15.118872   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:15.118882   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:15.118889   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:15.118907   18635 retry.go:31] will retry after 2.494582248s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:17.619339   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:17.619375   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:17.619384   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:17.619395   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:17.619403   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:17.619422   18635 retry.go:31] will retry after 3.420895489s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:15.729873   19204 out.go:204]   - Booting up control plane ...
	I0816 22:43:21.047237   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:21.047269   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:21.047276   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:21.047287   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:21.047294   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:21.047310   18635 retry.go:31] will retry after 4.133785681s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:22.636356   18929 out.go:204]   - Configuring RBAC rules ...
	I0816 22:43:23.371015   18929 cni.go:93] Creating CNI manager for ""
	I0816 22:43:23.371043   18929 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:43:23.373006   18929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:43:23.373076   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:43:23.386712   18929 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:43:23.415554   18929 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:43:23.415693   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:23.415773   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=embed-certs-20210816223333-6986 minikube.k8s.io/updated_at=2021_08_16T22_43_23_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:24.042222   18929 ops.go:34] apiserver oom_adj: -16
	I0816 22:43:24.042207   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:24.699493   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:25.199877   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:25.699926   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:25.189718   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:25.189751   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:25.189758   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:25.189768   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:25.189775   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:25.189795   18635 retry.go:31] will retry after 5.595921491s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:26.199444   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:26.699547   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:27.199378   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:27.699869   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:28.200370   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:28.700011   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:29.199882   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:29.700066   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:30.200161   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:30.699359   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:31.887219   19204 out.go:204]   - Configuring RBAC rules ...
	I0816 22:43:32.571790   19204 cni.go:93] Creating CNI manager for ""
	I0816 22:43:32.571817   19204 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:43:30.804838   18635 system_pods.go:86] 8 kube-system pods found
	I0816 22:43:30.804876   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:30.804884   18635 system_pods.go:89] "etcd-old-k8s-version-20210816223154-6986" [61433b17-fee3-11eb-bea8-525400bf2371] Pending
	I0816 22:43:30.804891   18635 system_pods.go:89] "kube-apiserver-old-k8s-version-20210816223154-6986" [5e48aade-fee3-11eb-bea8-525400bf2371] Pending
	I0816 22:43:30.804897   18635 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210816223154-6986" [5e48d2c6-fee3-11eb-bea8-525400bf2371] Pending
	I0816 22:43:30.804902   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:30.804908   18635 system_pods.go:89] "kube-scheduler-old-k8s-version-20210816223154-6986" [60110a1b-fee3-11eb-bea8-525400bf2371] Pending
	I0816 22:43:30.804918   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:30.804925   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:30.804943   18635 retry.go:31] will retry after 6.3346098s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:32.573869   19204 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:43:32.573957   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:43:32.585155   19204 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:43:32.601590   19204 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:43:32.601652   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:32.601677   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=default-k8s-different-port-20210816223418-6986 minikube.k8s.io/updated_at=2021_08_16T22_43_32_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:32.631177   19204 ops.go:34] apiserver oom_adj: -16
	I0816 22:43:33.115780   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:33.764597   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:34.265250   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:34.764717   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:31.200176   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:31.700178   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:32.200029   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:32.699789   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:33.200341   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:33.699709   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:34.199959   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:34.699635   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:35.199401   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:35.497436   18929 kubeadm.go:985] duration metric: took 12.081799779s to wait for elevateKubeSystemPrivileges.
	I0816 22:43:35.497485   18929 kubeadm.go:392] StartCluster complete in 5m48.214136187s
	I0816 22:43:35.497508   18929 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:43:35.497637   18929 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:43:35.500294   18929 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:43:36.034903   18929 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20210816223333-6986" rescaled to 1
	I0816 22:43:36.034983   18929 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.105.129 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:43:36.036731   18929 out.go:177] * Verifying Kubernetes components...
	I0816 22:43:36.035020   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:43:36.036813   18929 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:43:36.035043   18929 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:43:36.036910   18929 addons.go:59] Setting storage-provisioner=true in profile "embed-certs-20210816223333-6986"
	I0816 22:43:36.036926   18929 addons.go:59] Setting default-storageclass=true in profile "embed-certs-20210816223333-6986"
	I0816 22:43:36.036937   18929 addons.go:135] Setting addon storage-provisioner=true in "embed-certs-20210816223333-6986"
	I0816 22:43:36.036942   18929 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20210816223333-6986"
	W0816 22:43:36.036948   18929 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:43:36.036978   18929 host.go:66] Checking if "embed-certs-20210816223333-6986" exists ...
	I0816 22:43:36.037395   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.037430   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.037443   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.037445   18929 addons.go:59] Setting metrics-server=true in profile "embed-certs-20210816223333-6986"
	I0816 22:43:36.037462   18929 addons.go:135] Setting addon metrics-server=true in "embed-certs-20210816223333-6986"
	I0816 22:43:36.037464   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	W0816 22:43:36.037471   18929 addons.go:147] addon metrics-server should already be in state true
	I0816 22:43:36.036912   18929 addons.go:59] Setting dashboard=true in profile "embed-certs-20210816223333-6986"
	I0816 22:43:36.037504   18929 addons.go:135] Setting addon dashboard=true in "embed-certs-20210816223333-6986"
	W0816 22:43:36.037509   18929 addons.go:147] addon dashboard should already be in state true
	I0816 22:43:36.035195   18929 config.go:177] Loaded profile config "embed-certs-20210816223333-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:43:36.037546   18929 host.go:66] Checking if "embed-certs-20210816223333-6986" exists ...
	I0816 22:43:36.037680   18929 host.go:66] Checking if "embed-certs-20210816223333-6986" exists ...
	I0816 22:43:36.037934   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.037965   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.038094   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.038128   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.052922   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45791
	I0816 22:43:36.053393   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.053967   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.053996   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.054376   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.054999   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.055044   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.057606   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34275
	I0816 22:43:36.057965   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.058476   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.058504   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.058889   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.059518   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.059555   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.061564   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35827
	I0816 22:43:36.061953   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.062427   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.062448   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.062776   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.062919   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetState
	I0816 22:43:36.067479   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35939
	I0816 22:43:36.067916   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.068397   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.068420   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.068756   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.069319   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.069365   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.070906   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46287
	I0816 22:43:36.071487   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.071940   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.071962   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.072029   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43295
	I0816 22:43:36.072345   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.072346   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.072513   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetState
	I0816 22:43:36.072847   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.072869   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.073161   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.073332   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetState
	I0816 22:43:36.077207   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:43:36.077344   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:43:36.079180   18929 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:43:36.080548   18929 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:43:36.079295   18929 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:43:36.080582   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:43:36.080603   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:43:36.081867   18929 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:43:36.081926   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:43:36.081938   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:43:36.081954   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:43:36.082858   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37409
	I0816 22:43:36.083299   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.083845   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.083868   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.084213   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.084387   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetState
	I0816 22:43:36.086977   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.087634   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:43:36.087699   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:43:36.087722   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.087759   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:43:36.087803   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:43:36.087949   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:43:37.147660   18635 system_pods.go:86] 8 kube-system pods found
	I0816 22:43:37.147701   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:37.147710   18635 system_pods.go:89] "etcd-old-k8s-version-20210816223154-6986" [61433b17-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:37.147718   18635 system_pods.go:89] "kube-apiserver-old-k8s-version-20210816223154-6986" [5e48aade-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:37.147724   18635 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210816223154-6986" [5e48d2c6-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:37.147730   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:37.147736   18635 system_pods.go:89] "kube-scheduler-old-k8s-version-20210816223154-6986" [60110a1b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:37.147745   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:37.147755   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:37.147764   18635 system_pods.go:126] duration metric: took 32.511733609s to wait for k8s-apps to be running ...
	I0816 22:43:37.147783   18635 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:43:37.147836   18635 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:43:37.164370   18635 system_svc.go:56] duration metric: took 16.579311ms WaitForService to wait for kubelet.
	I0816 22:43:37.164403   18635 kubeadm.go:547] duration metric: took 46.394336574s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:43:37.164433   18635 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:43:37.168097   18635 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:43:37.168129   18635 node_conditions.go:123] node cpu capacity is 2
	I0816 22:43:37.168144   18635 node_conditions.go:105] duration metric: took 3.70586ms to run NodePressure ...
	I0816 22:43:37.168156   18635 start.go:231] waiting for startup goroutines ...
	I0816 22:43:37.217144   18635 start.go:462] kubectl: 1.20.5, cluster: 1.14.0 (minor skew: 6)
	I0816 22:43:37.219305   18635 out.go:177] 
	W0816 22:43:37.219480   18635 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.14.0.
	I0816 22:43:37.221278   18635 out.go:177]   - Want kubectl v1.14.0? Try 'minikube kubectl -- get pods -A'
	I0816 22:43:37.223010   18635 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-20210816223154-6986" cluster and "default" namespace by default
	I0816 22:43:35.265455   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:35.765450   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:36.264605   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:36.764601   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:37.265049   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:37.764595   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:38.265287   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:38.764994   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:39.265056   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:39.765476   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:36.089340   18929 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:43:36.089400   18929 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:43:36.089413   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:43:36.088130   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:43:36.089430   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:43:36.088890   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.089473   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:43:36.089505   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.089703   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:43:36.089898   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:43:36.090090   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:43:36.090267   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:43:36.094836   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.095297   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:43:36.095323   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.095512   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:43:36.095645   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:43:36.095759   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:43:36.095851   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:43:36.098057   18929 addons.go:135] Setting addon default-storageclass=true in "embed-certs-20210816223333-6986"
	W0816 22:43:36.098079   18929 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:43:36.098104   18929 host.go:66] Checking if "embed-certs-20210816223333-6986" exists ...
	I0816 22:43:36.098559   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.098603   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.109741   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39575
	I0816 22:43:36.110180   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.110794   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.110819   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.111190   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.111821   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.111864   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.123621   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45183
	I0816 22:43:36.124053   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.124503   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.124519   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.124829   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.125022   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetState
	I0816 22:43:36.128253   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:43:36.128476   18929 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:43:36.128494   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:43:36.128513   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:43:36.134156   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.134493   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:43:36.134521   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.134626   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:43:36.134834   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:43:36.135010   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:43:36.135176   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:43:36.334796   18929 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:43:36.462564   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:43:36.462619   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:43:36.510558   18929 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:43:36.513334   18929 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:43:36.513356   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:43:36.551208   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:43:36.551256   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:43:36.570189   18929 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:43:36.570216   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:43:36.657218   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:43:36.657250   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:43:36.692197   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:43:36.692227   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:43:36.774111   18929 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20210816223333-6986" to be "Ready" ...
	I0816 22:43:36.774340   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:43:36.790295   18929 node_ready.go:49] node "embed-certs-20210816223333-6986" has status "Ready":"True"
	I0816 22:43:36.790320   18929 node_ready.go:38] duration metric: took 16.177495ms waiting for node "embed-certs-20210816223333-6986" to be "Ready" ...
	I0816 22:43:36.790335   18929 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:36.797297   18929 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-6zv97" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:36.858095   18929 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:43:36.858120   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:43:36.981263   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:43:36.981292   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:43:37.007726   18929 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:43:37.229172   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:43:37.229198   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:43:37.412428   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:43:37.412464   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:43:37.604490   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:43:37.604516   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:43:37.864046   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:43:37.864072   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:43:37.954509   18929 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:43:38.628148   18929 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.293312909s)
	I0816 22:43:38.628197   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:38.628206   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:38.628466   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:38.628488   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:38.628499   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:38.628509   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:38.628847   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:38.628869   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:38.797491   18929 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.286893282s)
	I0816 22:43:38.797551   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:38.797565   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:38.797846   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Closing plugin on server side
	I0816 22:43:38.797888   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:38.797896   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:38.797904   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:38.797913   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:38.798184   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:38.798203   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:38.798216   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:38.798226   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:38.798467   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:38.798483   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:38.814757   18929 pod_ready.go:102] pod "coredns-558bd4d5db-6zv97" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:39.223137   18929 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.448766478s)
	I0816 22:43:39.223172   18929 start.go:728] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS
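The sed pipeline that just completed edits the kube-system coredns ConfigMap in place: it inserts a hosts block immediately ahead of the existing forward directive, then feeds the result back through kubectl replace. Reconstructed from the logged sed script (indentation approximate), the patched Corefile fragment looks like:

	hosts {
	   192.168.105.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf

The fallthrough directive hands any name other than host.minikube.internal on to the forward plugin, so cluster DNS behaviour is otherwise unchanged.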
	I0816 22:43:39.504206   18929 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.496431754s)
	I0816 22:43:39.504273   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:39.504297   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:39.504564   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:39.504585   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:39.504598   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:39.504611   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:39.504854   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:39.504863   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Closing plugin on server side
	I0816 22:43:39.504875   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:39.504890   18929 addons.go:313] Verifying addon metrics-server=true in "embed-certs-20210816223333-6986"
	I0816 22:43:39.815632   18929 pod_ready.go:97] error getting pod "coredns-558bd4d5db-6zv97" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-6zv97" not found
	I0816 22:43:39.815668   18929 pod_ready.go:81] duration metric: took 3.018337051s waiting for pod "coredns-558bd4d5db-6zv97" in "kube-system" namespace to be "Ready" ...
	E0816 22:43:39.815681   18929 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-6zv97" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-6zv97" not found
	I0816 22:43:39.815691   18929 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-mfshm" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:40.809470   18929 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.854902802s)
	I0816 22:43:40.809543   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:40.809566   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:40.811279   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:40.811299   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:40.811310   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:40.811320   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:40.811328   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Closing plugin on server side
	I0816 22:43:40.811538   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:40.811553   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:40.811561   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Closing plugin on server side
	I0816 22:43:40.813830   18929 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0816 22:43:40.813854   18929 addons.go:344] enableAddons completed in 4.778818205s
	I0816 22:43:41.867317   18929 pod_ready.go:102] pod "coredns-558bd4d5db-mfshm" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:43.368862   18929 pod_ready.go:92] pod "coredns-558bd4d5db-mfshm" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:43.368890   18929 pod_ready.go:81] duration metric: took 3.553191611s waiting for pod "coredns-558bd4d5db-mfshm" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.368903   18929 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.378704   18929 pod_ready.go:92] pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:43.378725   18929 pod_ready.go:81] duration metric: took 9.814161ms waiting for pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.378739   18929 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.402730   18929 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:43.402755   18929 pod_ready.go:81] duration metric: took 24.005322ms waiting for pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.402769   18929 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.411087   18929 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:43.411108   18929 pod_ready.go:81] duration metric: took 8.330836ms waiting for pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.411120   18929 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zwcwz" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.420161   18929 pod_ready.go:92] pod "kube-proxy-zwcwz" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:43.420183   18929 pod_ready.go:81] duration metric: took 9.054321ms waiting for pod "kube-proxy-zwcwz" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.420195   18929 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.764290   18929 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:43.764315   18929 pod_ready.go:81] duration metric: took 344.109074ms waiting for pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.764327   18929 pod_ready.go:38] duration metric: took 6.973978865s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:43.764347   18929 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:43:43.764398   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:43:43.785185   18929 api_server.go:70] duration metric: took 7.750163085s to wait for apiserver process to appear ...
	I0816 22:43:43.785212   18929 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:43:43.785222   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:43:43.795735   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 200:
	ok
	I0816 22:43:43.797225   18929 api_server.go:139] control plane version: v1.21.3
	I0816 22:43:43.797243   18929 api_server.go:129] duration metric: took 12.025112ms to wait for apiserver health ...
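The healthz probe logged above is a plain HTTPS GET retried until the endpoint answers 200 with body "ok". A minimal Go sketch of the same check, assuming the URL from the log and skipping the client-certificate setup minikube actually performs (InsecureSkipVerify stands in for it here):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// pollHealthz retries GET <url> until it returns HTTP 200 with body "ok",
	// or the timeout elapses. Mirrors the api_server.go healthz wait above.
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// Assumption: cert verification skipped; minikube verifies.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // apiserver healthy
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy within %s", timeout)
	}

	func main() {
		// URL taken from the log line above.
		if err := pollHealthz("https://192.168.105.129:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}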
	I0816 22:43:43.797252   18929 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:43:43.971546   18929 system_pods.go:59] 8 kube-system pods found
	I0816 22:43:43.971578   18929 system_pods.go:61] "coredns-558bd4d5db-mfshm" [cb9ac226-b63f-4de1-b4af-b8e2bf280d95] Running
	I0816 22:43:43.971584   18929 system_pods.go:61] "etcd-embed-certs-20210816223333-6986" [333a4b44-c417-46e6-8653-c1d24391c7ca] Running
	I0816 22:43:43.971590   18929 system_pods.go:61] "kube-apiserver-embed-certs-20210816223333-6986" [414c58e9-8dcf-4f0c-9a5e-ff21a694067d] Running
	I0816 22:43:43.971596   18929 system_pods.go:61] "kube-controller-manager-embed-certs-20210816223333-6986" [c80d067f-ee6a-4e6a-b062-c2ff64c6bd81] Running
	I0816 22:43:43.971601   18929 system_pods.go:61] "kube-proxy-zwcwz" [f85562a3-8576-4dbf-a2b2-3f6a3d199df3] Running
	I0816 22:43:43.971608   18929 system_pods.go:61] "kube-scheduler-embed-certs-20210816223333-6986" [92b9b318-e6e4-4891-9609-5fe26593bcdb] Running
	I0816 22:43:43.971621   18929 system_pods.go:61] "metrics-server-7c784ccb57-qfrpw" [abb75357-7b33-4327-aa7f-8e9c15a192f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:43.971632   18929 system_pods.go:61] "storage-provisioner" [f3fc0038-f88e-416f-81e3-fb387b0e010a] Running
	I0816 22:43:43.971639   18929 system_pods.go:74] duration metric: took 174.380965ms to wait for pod list to return data ...
	I0816 22:43:43.971647   18929 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:43:44.164541   18929 default_sa.go:45] found service account: "default"
	I0816 22:43:44.164564   18929 default_sa.go:55] duration metric: took 192.910888ms for default service account to be created ...
	I0816 22:43:44.164584   18929 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:43:44.367138   18929 system_pods.go:86] 8 kube-system pods found
	I0816 22:43:44.367172   18929 system_pods.go:89] "coredns-558bd4d5db-mfshm" [cb9ac226-b63f-4de1-b4af-b8e2bf280d95] Running
	I0816 22:43:44.367181   18929 system_pods.go:89] "etcd-embed-certs-20210816223333-6986" [333a4b44-c417-46e6-8653-c1d24391c7ca] Running
	I0816 22:43:44.367190   18929 system_pods.go:89] "kube-apiserver-embed-certs-20210816223333-6986" [414c58e9-8dcf-4f0c-9a5e-ff21a694067d] Running
	I0816 22:43:44.367197   18929 system_pods.go:89] "kube-controller-manager-embed-certs-20210816223333-6986" [c80d067f-ee6a-4e6a-b062-c2ff64c6bd81] Running
	I0816 22:43:44.367204   18929 system_pods.go:89] "kube-proxy-zwcwz" [f85562a3-8576-4dbf-a2b2-3f6a3d199df3] Running
	I0816 22:43:44.367211   18929 system_pods.go:89] "kube-scheduler-embed-certs-20210816223333-6986" [92b9b318-e6e4-4891-9609-5fe26593bcdb] Running
	I0816 22:43:44.367229   18929 system_pods.go:89] "metrics-server-7c784ccb57-qfrpw" [abb75357-7b33-4327-aa7f-8e9c15a192f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:44.367239   18929 system_pods.go:89] "storage-provisioner" [f3fc0038-f88e-416f-81e3-fb387b0e010a] Running
	I0816 22:43:44.367248   18929 system_pods.go:126] duration metric: took 202.65882ms to wait for k8s-apps to be running ...
	I0816 22:43:44.367259   18929 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:43:44.367307   18929 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:43:44.381654   18929 system_svc.go:56] duration metric: took 14.389765ms WaitForService to wait for kubelet.
	I0816 22:43:44.381678   18929 kubeadm.go:547] duration metric: took 8.346663342s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:43:44.381702   18929 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:43:44.563414   18929 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:43:44.563447   18929 node_conditions.go:123] node cpu capacity is 2
	I0816 22:43:44.563461   18929 node_conditions.go:105] duration metric: took 181.753579ms to run NodePressure ...
	I0816 22:43:44.563473   18929 start.go:231] waiting for startup goroutines ...
	I0816 22:43:44.614237   18929 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 22:43:44.616600   18929 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210816223333-6986" cluster and "default" namespace by default
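The pod_ready.go waits in the log above poll each pod until its Ready condition reports True, and (as the coredns-558bd4d5db-6zv97 "not found (skipping!)" lines show) tolerate a pod disappearing mid-wait when its deployment is rescaled. A minimal client-go sketch of that Ready-condition check, with the kubeconfig path and pod name as placeholder assumptions, not minikube's actual implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder values; the real code caps the wait at 6m0s.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			pod, err := cs.CoreV1().Pods("kube-system").
				Get(context.TODO(), "coredns-558bd4d5db-mfshm", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				// Pod was replaced (e.g. deployment rescaled): skip it,
				// as the log does, and wait on its replacement instead.
				fmt.Println("pod gone (skipping!)")
				return
			}
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}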
	I0816 22:43:40.264690   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:40.765483   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:41.264614   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:41.764581   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:42.265395   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:42.764674   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:43.265319   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:43.765315   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:44.265020   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:44.764726   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:45.265506   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:45.590495   19204 kubeadm.go:985] duration metric: took 12.988891092s to wait for elevateKubeSystemPrivileges.
	I0816 22:43:45.590529   19204 kubeadm.go:392] StartCluster complete in 5m37.979340771s
	I0816 22:43:45.590548   19204 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:43:45.590642   19204 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:43:45.593541   19204 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0816 22:43:45.657324   19204 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0816 22:43:46.665400   19204 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210816223418-6986" rescaled to 1
	I0816 22:43:46.665482   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:43:46.665515   19204 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:43:46.667711   19204 out.go:177] * Verifying Kubernetes components...
	I0816 22:43:46.667773   19204 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:43:46.665580   19204 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:43:46.667837   19204 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210816223418-6986"
	I0816 22:43:46.667852   19204 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210816223418-6986"
	I0816 22:43:46.667860   19204 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210816223418-6986"
	I0816 22:43:46.667868   19204 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210816223418-6986"
	I0816 22:43:46.667838   19204 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210816223418-6986"
	I0816 22:43:46.667885   19204 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210816223418-6986"
	W0816 22:43:46.667894   19204 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:43:46.667927   19204 host.go:66] Checking if "default-k8s-different-port-20210816223418-6986" exists ...
	I0816 22:43:46.668329   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.668351   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.668368   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.668386   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.665780   19204 config.go:177] Loaded profile config "default-k8s-different-port-20210816223418-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:43:46.668451   19204 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210816223418-6986"
	W0816 22:43:46.667870   19204 addons.go:147] addon dashboard should already be in state true
	I0816 22:43:46.668473   19204 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210816223418-6986"
	W0816 22:43:46.668483   19204 addons.go:147] addon metrics-server should already be in state true
	I0816 22:43:46.668492   19204 host.go:66] Checking if "default-k8s-different-port-20210816223418-6986" exists ...
	I0816 22:43:46.668520   19204 host.go:66] Checking if "default-k8s-different-port-20210816223418-6986" exists ...
	I0816 22:43:46.668864   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.668905   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.668950   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.668990   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.689974   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42521
	I0816 22:43:46.690669   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.691280   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.691314   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.691679   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.692276   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.692315   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.692464   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43883
	I0816 22:43:46.693031   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.693526   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.693553   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.693968   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.694137   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetState
	I0816 22:43:46.705753   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38281
	I0816 22:43:46.706172   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.707061   19204 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210816223418-6986"
	W0816 22:43:46.707082   19204 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:43:46.707108   19204 host.go:66] Checking if "default-k8s-different-port-20210816223418-6986" exists ...
	I0816 22:43:46.707465   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.707503   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.707516   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.707545   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.707576   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44259
	I0816 22:43:46.707845   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.707927   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.708047   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetState
	I0816 22:43:46.708442   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.708498   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.708850   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40817
	I0816 22:43:46.708875   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.709295   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.709795   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.709831   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.709802   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.709896   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.710319   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.710841   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.710885   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.712390   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:43:46.714605   19204 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:43:46.714712   19204 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:43:46.714728   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:43:46.714749   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:43:46.721156   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.721380   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:43:46.721409   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.721629   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:43:46.721735   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:43:46.721864   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:43:46.721924   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:43:46.729886   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41981
	I0816 22:43:46.730281   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.730709   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.730725   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.731239   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.731805   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.731916   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.731997   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0816 22:43:46.732368   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.732825   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.732847   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.733209   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.733449   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetState
	I0816 22:43:46.734015   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:32775
	I0816 22:43:46.734430   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.735096   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.735146   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.735539   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.735710   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetState
	I0816 22:43:46.737120   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:43:46.739055   19204 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:43:46.738848   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:43:46.739120   19204 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:43:46.739134   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:43:46.739158   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:43:46.740784   19204 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:43:46.742206   19204 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:43:46.742257   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:43:46.742270   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:43:46.742288   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:43:46.745626   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.746290   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:43:46.746384   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.746885   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:43:46.747264   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41917
	I0816 22:43:46.747662   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.748053   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.748065   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.748398   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.748516   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetState
	I0816 22:43:46.748635   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.749011   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:43:46.749029   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.749196   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:43:46.749309   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:43:46.749445   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:43:46.749576   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:43:46.751724   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:43:46.751878   19204 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:43:46.751885   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:43:46.751895   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:43:46.755264   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:43:46.755420   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:43:46.755543   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:43:46.757535   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.757844   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:43:46.757880   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.757947   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:43:46.758111   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:43:46.758232   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:43:46.758336   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:43:46.912338   19204 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:43:46.928084   19204 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210816223418-6986" to be "Ready" ...
	I0816 22:43:46.928162   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:43:46.932654   19204 node_ready.go:49] node "default-k8s-different-port-20210816223418-6986" has status "Ready":"True"
	I0816 22:43:46.932677   19204 node_ready.go:38] duration metric: took 4.560299ms waiting for node "default-k8s-different-port-20210816223418-6986" to be "Ready" ...
	I0816 22:43:46.932688   19204 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:46.938801   19204 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-jvhn9" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:46.959212   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:43:46.959239   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:43:46.980444   19204 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:43:46.992693   19204 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:43:46.992712   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:43:47.139897   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:43:47.140481   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:43:47.283513   19204 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:43:47.283548   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:43:47.307099   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:43:47.307124   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:43:47.337466   19204 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:43:47.337491   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:43:47.400423   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:43:47.400457   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:43:47.428735   19204 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:43:47.473437   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:43:47.473470   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:43:47.809043   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:43:47.809076   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:43:48.151719   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:43:48.151750   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:43:48.433383   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:43:48.433418   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:43:48.581909   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:43:48.581937   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:43:48.707807   19204 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:43:48.976179   19204 pod_ready.go:102] pod "coredns-558bd4d5db-jvhn9" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:49.433748   19204 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.505548681s)
	I0816 22:43:49.433787   19204 start.go:728] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS
	I0816 22:43:49.434692   19204 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.522311056s)
	I0816 22:43:49.434732   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:49.434747   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:49.435098   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:49.435119   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:49.435131   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:49.435132   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Closing plugin on server side
	I0816 22:43:49.435143   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:49.435401   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:49.435415   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:49.725705   19204 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.745219751s)
	I0816 22:43:49.725764   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:49.725779   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:49.726085   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:49.726107   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:49.726124   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:49.726137   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:49.726388   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:49.726414   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:49.726427   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:49.726428   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Closing plugin on server side
	I0816 22:43:49.726440   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:49.726685   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Closing plugin on server side
	I0816 22:43:49.726730   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:49.726743   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:50.084265   19204 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.655473327s)
	I0816 22:43:50.084320   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:50.084336   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:50.084638   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:50.084661   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:50.084671   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:50.084682   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:50.084904   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:50.084916   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:50.084935   19204 addons.go:313] Verifying addon metrics-server=true in "default-k8s-different-port-20210816223418-6986"
	I0816 22:43:51.000374   19204 pod_ready.go:92] pod "coredns-558bd4d5db-jvhn9" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:51.000409   19204 pod_ready.go:81] duration metric: took 4.061576094s waiting for pod "coredns-558bd4d5db-jvhn9" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.000426   19204 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.042067   19204 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:51.042087   19204 pod_ready.go:81] duration metric: took 41.651304ms waiting for pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.042101   19204 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.076320   19204 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.368445421s)
	I0816 22:43:51.076371   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:51.076392   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:51.076636   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:51.076655   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:51.076666   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:51.076676   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:51.076961   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:51.076973   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Closing plugin on server side
	I0816 22:43:51.076983   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:51.078741   19204 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0816 22:43:51.078768   19204 addons.go:344] enableAddons completed in 4.413194371s
	I0816 22:43:51.095847   19204 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:51.095868   19204 pod_ready.go:81] duration metric: took 53.758678ms waiting for pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.095885   19204 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.120117   19204 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:51.120136   19204 pod_ready.go:81] duration metric: took 24.240957ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.120151   19204 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qhsq8" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.137975   19204 pod_ready.go:92] pod "kube-proxy-qhsq8" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:51.138000   19204 pod_ready.go:81] duration metric: took 17.840798ms waiting for pod "kube-proxy-qhsq8" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.138013   19204 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.361490   19204 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:51.361513   19204 pod_ready.go:81] duration metric: took 223.49089ms waiting for pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.361522   19204 pod_ready.go:38] duration metric: took 4.428821843s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:51.361535   19204 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:43:51.361593   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:43:51.390742   19204 api_server.go:70] duration metric: took 4.724914292s to wait for apiserver process to appear ...
	I0816 22:43:51.390767   19204 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:43:51.390777   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:43:51.398481   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 200:
	ok
	I0816 22:43:51.402341   19204 api_server.go:139] control plane version: v1.21.3
	I0816 22:43:51.402366   19204 api_server.go:129] duration metric: took 11.590514ms to wait for apiserver health ...
	I0816 22:43:51.402376   19204 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:43:51.553058   19204 system_pods.go:59] 8 kube-system pods found
	I0816 22:43:51.553092   19204 system_pods.go:61] "coredns-558bd4d5db-jvhn9" [3c48c2dc-4beb-4359-aadc-1365db48feac] Running
	I0816 22:43:51.553102   19204 system_pods.go:61] "etcd-default-k8s-different-port-20210816223418-6986" [1ec44a23-d678-413f-bc79-1b3b24c77422] Running
	I0816 22:43:51.553109   19204 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210816223418-6986" [9246fbb2-2bd6-42a5-ad37-66c828343f50] Running
	I0816 22:43:51.553116   19204 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210816223418-6986" [974dfb9a-e4b0-4aee-862f-6b0b06f6491e] Running
	I0816 22:43:51.553122   19204 system_pods.go:61] "kube-proxy-qhsq8" [9abb9351-b721-48bb-94b9-887b5afc7584] Running
	I0816 22:43:51.553128   19204 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210816223418-6986" [b73b2730-6367-4d16-90c7-4ba6ec17f6ef] Running
	I0816 22:43:51.553142   19204 system_pods.go:61] "metrics-server-7c784ccb57-pbxnr" [fa2d27a5-b243-4a8f-9450-b834d1ce5bb0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:51.553155   19204 system_pods.go:61] "storage-provisioner" [a88a523b-5707-46b9-b7cf-6931db0d4487] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:43:51.553166   19204 system_pods.go:74] duration metric: took 150.783692ms to wait for pod list to return data ...
	I0816 22:43:51.553177   19204 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:43:51.749364   19204 default_sa.go:45] found service account: "default"
	I0816 22:43:51.749393   19204 default_sa.go:55] duration metric: took 196.209447ms for default service account to be created ...
	I0816 22:43:51.749405   19204 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:43:51.953876   19204 system_pods.go:86] 8 kube-system pods found
	I0816 22:43:51.953914   19204 system_pods.go:89] "coredns-558bd4d5db-jvhn9" [3c48c2dc-4beb-4359-aadc-1365db48feac] Running
	I0816 22:43:51.953923   19204 system_pods.go:89] "etcd-default-k8s-different-port-20210816223418-6986" [1ec44a23-d678-413f-bc79-1b3b24c77422] Running
	I0816 22:43:51.953931   19204 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20210816223418-6986" [9246fbb2-2bd6-42a5-ad37-66c828343f50] Running
	I0816 22:43:51.953938   19204 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20210816223418-6986" [974dfb9a-e4b0-4aee-862f-6b0b06f6491e] Running
	I0816 22:43:51.953949   19204 system_pods.go:89] "kube-proxy-qhsq8" [9abb9351-b721-48bb-94b9-887b5afc7584] Running
	I0816 22:43:51.953958   19204 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20210816223418-6986" [b73b2730-6367-4d16-90c7-4ba6ec17f6ef] Running
	I0816 22:43:51.953971   19204 system_pods.go:89] "metrics-server-7c784ccb57-pbxnr" [fa2d27a5-b243-4a8f-9450-b834d1ce5bb0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:51.953985   19204 system_pods.go:89] "storage-provisioner" [a88a523b-5707-46b9-b7cf-6931db0d4487] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:43:51.954000   19204 system_pods.go:126] duration metric: took 204.589729ms to wait for k8s-apps to be running ...
	I0816 22:43:51.954014   19204 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:43:51.954066   19204 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:43:51.982620   19204 system_svc.go:56] duration metric: took 28.600519ms WaitForService to wait for kubelet.
	I0816 22:43:51.982645   19204 kubeadm.go:547] duration metric: took 5.316821186s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:43:51.982666   19204 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:43:52.146042   19204 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:43:52.146082   19204 node_conditions.go:123] node cpu capacity is 2
	I0816 22:43:52.146096   19204 node_conditions.go:105] duration metric: took 163.423737ms to run NodePressure ...
	I0816 22:43:52.146108   19204 start.go:231] waiting for startup goroutines ...
	I0816 22:43:52.193059   19204 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 22:43:52.195545   19204 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20210816223418-6986" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	9672b955ced7e       523cad1a4df73       25 seconds ago       Exited              dashboard-metrics-scraper   3                   2f7b780c4307b
	e30d5daacc87f       9a07b5b4bfac0       About a minute ago   Running             kubernetes-dashboard        0                   12379a58cddde
	7be9f927a71c2       6e38f40d628db       About a minute ago   Exited              storage-provisioner         0                   2a86821519acf
	6ea2a5a98c778       eb516548c180f       About a minute ago   Running             coredns                     0                   e40105c5a7146
	c56f64c3fe77c       5cd54e388abaf       About a minute ago   Running             kube-proxy                  0                   3a1b01ca39e0d
	5d87176f403fd       2c4adeb21b4ff       About a minute ago   Running             etcd                        0                   59dc9b6f995b1
	1a95abd0c58fe       b95b1efa0436b       About a minute ago   Running             kube-controller-manager     0                   d44da39af5252
	1d8ecdfb3c614       00638a24688b0       About a minute ago   Running             kube-scheduler              0                   d8a20faccc886
	7237baa217ae7       ecf910f40d6e0       About a minute ago   Running             kube-apiserver              0                   3a4ddc9391f6b
	
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2021-08-16 22:36:28 UTC, end at Mon 2021-08-16 22:44:05 UTC. --
	Aug 16 22:43:35 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:35.586887534Z" level=info msg="Finish piping \"stderr\" of container exec \"757fb3eb1897f1b79f6161a8ded77dce2d35ec992963ef69a2d1b2eaf87d689a\""
	Aug 16 22:43:35 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:35.587063058Z" level=info msg="Finish piping \"stdout\" of container exec \"757fb3eb1897f1b79f6161a8ded77dce2d35ec992963ef69a2d1b2eaf87d689a\""
	Aug 16 22:43:35 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:35.587896854Z" level=info msg="Exec process \"757fb3eb1897f1b79f6161a8ded77dce2d35ec992963ef69a2d1b2eaf87d689a\" exits with exit code 0 and error <nil>"
	Aug 16 22:43:35 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:35.591908272Z" level=info msg="ExecSync for \"5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36\" returns with exit code 0"
	Aug 16 22:43:39 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:39.774405602Z" level=info msg="CreateContainer within sandbox \"2f7b780c4307b02ef87b4146fa91f3e11360d6db0cbc012d1cc73f4b3bf9f70c\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:3,}"
	Aug 16 22:43:39 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:39.835366284Z" level=info msg="CreateContainer within sandbox \"2f7b780c4307b02ef87b4146fa91f3e11360d6db0cbc012d1cc73f4b3bf9f70c\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:3,} returns container id \"9672b955ced7e9fac049fbb30c5f91f4d0f06cda7931c585f9174c9e9693835e\""
	Aug 16 22:43:39 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:39.839941719Z" level=info msg="StartContainer for \"9672b955ced7e9fac049fbb30c5f91f4d0f06cda7931c585f9174c9e9693835e\""
	Aug 16 22:43:40 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:40.232575683Z" level=info msg="StartContainer for \"9672b955ced7e9fac049fbb30c5f91f4d0f06cda7931c585f9174c9e9693835e\" returns successfully"
	Aug 16 22:43:40 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:40.268869593Z" level=info msg="Finish piping stdout of container \"9672b955ced7e9fac049fbb30c5f91f4d0f06cda7931c585f9174c9e9693835e\""
	Aug 16 22:43:40 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:40.269063038Z" level=info msg="Finish piping stderr of container \"9672b955ced7e9fac049fbb30c5f91f4d0f06cda7931c585f9174c9e9693835e\""
	Aug 16 22:43:40 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:40.270862282Z" level=info msg="TaskExit event &TaskExit{ContainerID:9672b955ced7e9fac049fbb30c5f91f4d0f06cda7931c585f9174c9e9693835e,ID:9672b955ced7e9fac049fbb30c5f91f4d0f06cda7931c585f9174c9e9693835e,Pid:8287,ExitStatus:1,ExitedAt:2021-08-16 22:43:40.270131421 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:43:40 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:40.331879178Z" level=info msg="shim disconnected" id=9672b955ced7e9fac049fbb30c5f91f4d0f06cda7931c585f9174c9e9693835e
	Aug 16 22:43:40 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:40.332418749Z" level=error msg="copy shim log" error="read /proc/self/fd/115: file already closed"
	Aug 16 22:43:40 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:40.452270528Z" level=info msg="RemoveContainer for \"424c2afc39c5aef3c23a866ae1af5b1e287e6f1a825a34fb49efed7e3fbb0d0f\""
	Aug 16 22:43:40 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:40.469207115Z" level=info msg="RemoveContainer for \"424c2afc39c5aef3c23a866ae1af5b1e287e6f1a825a34fb49efed7e3fbb0d0f\" returns successfully"
	Aug 16 22:43:45 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:45.472555337Z" level=info msg="ExecSync for \"5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 16 22:43:45 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:45.645981133Z" level=info msg="Finish piping \"stderr\" of container exec \"73145ea1780a3f9b0339a69d0d67ac61b1682cc43431b53d69a7d83bb90823b5\""
	Aug 16 22:43:45 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:45.646858395Z" level=info msg="Finish piping \"stdout\" of container exec \"73145ea1780a3f9b0339a69d0d67ac61b1682cc43431b53d69a7d83bb90823b5\""
	Aug 16 22:43:45 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:45.647507032Z" level=info msg="Exec process \"73145ea1780a3f9b0339a69d0d67ac61b1682cc43431b53d69a7d83bb90823b5\" exits with exit code 0 and error <nil>"
	Aug 16 22:43:45 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:45.650882421Z" level=info msg="ExecSync for \"5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36\" returns with exit code 0"
	Aug 16 22:44:00 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:44:00.714773255Z" level=info msg="Finish piping stderr of container \"7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b\""
	Aug 16 22:44:00 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:44:00.715270451Z" level=info msg="Finish piping stdout of container \"7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b\""
	Aug 16 22:44:00 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:44:00.719966050Z" level=info msg="TaskExit event &TaskExit{ContainerID:7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b,ID:7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b,Pid:7404,ExitStatus:255,ExitedAt:2021-08-16 22:44:00.719190091 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:44:00 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:44:00.773223002Z" level=info msg="shim disconnected" id=7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b
	Aug 16 22:44:00 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:44:00.773581010Z" level=error msg="copy shim log" error="read /proc/self/fd/98: file already closed"
	
	* 
	* ==> coredns [6ea2a5a98c7784a7ad7702acc262ab5efb1fb3b94ee224698a6ebf4024bc79d9] <==
	* .:53
	2021-08-16T22:42:53.379Z [INFO] CoreDNS-1.3.1
	2021-08-16T22:42:53.380Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-16T22:42:53.380Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669
	[INFO] Reloading
	2021-08-16T22:43:27.937Z [INFO] plugin/reload: Running configuration MD5 = f060395823a948597d75c8d639586234
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.032614] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.912250] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1728 comm=systemd-network
	[  +0.736337] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.278439] vboxguest: loading out-of-tree module taints kernel.
	[  +0.011428] vboxguest: PCI device not found, probably running on physical hardware.
	[ +22.489450] systemd-fstab-generator[2070]: Ignoring "noauto" for root device
	[  +1.510829] systemd-fstab-generator[2104]: Ignoring "noauto" for root device
	[  +0.131636] systemd-fstab-generator[2117]: Ignoring "noauto" for root device
	[  +0.199221] systemd-fstab-generator[2148]: Ignoring "noauto" for root device
	[Aug16 22:37] systemd-fstab-generator[2341]: Ignoring "noauto" for root device
	[ +21.434266] kauditd_printk_skb: 20 callbacks suppressed
	[ +13.076129] kauditd_printk_skb: 116 callbacks suppressed
	[ +14.303690] kauditd_printk_skb: 20 callbacks suppressed
	[Aug16 22:38] kauditd_printk_skb: 5 callbacks suppressed
	[ +14.658718] kauditd_printk_skb: 50 callbacks suppressed
	[ +14.042008] NFSD: Unable to end grace period: -110
	[Aug16 22:42] systemd-fstab-generator[6208]: Ignoring "noauto" for root device
	[ +14.094438] tee (6649): /proc/6453/oom_adj is deprecated, please use /proc/6453/oom_score_adj instead.
	[ +15.834965] kauditd_printk_skb: 77 callbacks suppressed
	[  +6.216912] kauditd_printk_skb: 134 callbacks suppressed
	[Aug16 22:43] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.987543] systemd-fstab-generator[8373]: Ignoring "noauto" for root device
	[  +0.814258] systemd-fstab-generator[8427]: Ignoring "noauto" for root device
	[  +1.051816] systemd-fstab-generator[8478]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36] <==
	* 2021-08-16 22:42:24.919421 I | etcdserver/membership: added member 27d7fcd40abd9523 [https://192.168.94.246:2380] to cluster c6e0109e79cd232
	2021-08-16 22:42:24.923170 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-16 22:42:24.924460 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-16 22:42:24.925117 I | embed: listening for metrics on http://192.168.94.246:2381
	2021-08-16 22:42:25.583010 I | raft: 27d7fcd40abd9523 is starting a new election at term 1
	2021-08-16 22:42:25.583189 I | raft: 27d7fcd40abd9523 became candidate at term 2
	2021-08-16 22:42:25.583232 I | raft: 27d7fcd40abd9523 received MsgVoteResp from 27d7fcd40abd9523 at term 2
	2021-08-16 22:42:25.583249 I | raft: 27d7fcd40abd9523 became leader at term 2
	2021-08-16 22:42:25.583260 I | raft: raft.node: 27d7fcd40abd9523 elected leader 27d7fcd40abd9523 at term 2
	2021-08-16 22:42:25.584018 I | etcdserver: published {Name:old-k8s-version-20210816223154-6986 ClientURLs:[https://192.168.94.246:2379]} to cluster c6e0109e79cd232
	2021-08-16 22:42:25.584145 I | embed: ready to serve client requests
	2021-08-16 22:42:25.585484 I | etcdserver: setting up the initial cluster version to 3.3
	2021-08-16 22:42:25.586252 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-16 22:42:25.587019 N | etcdserver/membership: set the initial cluster version to 3.3
	2021-08-16 22:42:25.599679 I | etcdserver/api: enabled capabilities for version 3.3
	2021-08-16 22:42:25.599962 I | embed: ready to serve client requests
	2021-08-16 22:42:25.609964 I | embed: serving client requests on 192.168.94.246:2379
	proto: no coders for int
	proto: no encoder for ValueSize int [GetProperties]
	2021-08-16 22:42:36.619908 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (127.850155ms) to execute
	2021-08-16 22:42:53.028951 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/metrics-server\" " with result "range_response_count:0 size:5" took too long (173.129099ms) to execute
	2021-08-16 22:42:53.086515 W | etcdserver: read-only range request "key:\"/registry/clusterroles/edit\" " with result "range_response_count:1 size:2542" took too long (117.835584ms) to execute
	2021-08-16 22:42:53.099996 W | etcdserver: read-only range request "key:\"/registry/clusterroles/view\" " with result "range_response_count:1 size:1333" took too long (125.717317ms) to execute
	2021-08-16 22:42:53.107685 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (202.636243ms) to execute
	2021-08-16 22:42:53.115431 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-8546d8b77b.169beab9869bd7a6\" " with result "range_response_count:1 size:602" took too long (205.644523ms) to execute
	
	* 
	* ==> kernel <==
	*  22:45:09 up 8 min,  0 users,  load average: 0.51, 0.60, 0.32
	Linux old-k8s-version-20210816223154-6986 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [7237baa217ae7218864322d6893eb52cef6f4f80ba58f4e4a0683b2bb5861a85] <==
	* I0816 22:45:06.783074       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	W0816 22:45:06.783500       1 asm_amd64.s:1337] Failed to dial 127.0.0.1:2379: grpc: the connection is closing; please retry.
	I0816 22:45:06.798506       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
	I0816 22:45:06.798856       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	W0816 22:45:06.799090       1 asm_amd64.s:1337] Failed to dial 127.0.0.1:2379: grpc: the connection is closing; please retry.
	I0816 22:45:06.799105       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	W0816 22:45:06.799908       1 asm_amd64.s:1337] Failed to dial 127.0.0.1:2379: grpc: the connection is closing; please retry.
	I0816 22:45:06.807328       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
	I0816 22:45:06.807855       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	W0816 22:45:06.807988       1 asm_amd64.s:1337] Failed to dial 127.0.0.1:2379: grpc: the connection is closing; please retry.
	I0816 22:45:06.808078       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	I0816 22:45:07.667858       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0816 22:45:07.668486       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	W0816 22:45:07.739549       1 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0816 22:45:07.739566       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
	I0816 22:45:07.740483       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	I0816 22:45:07.741141       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	W0816 22:45:07.754411       1 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0816 22:45:07.754833       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
	I0816 22:45:07.754945       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	I0816 22:45:07.755151       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	I0816 22:45:08.668143       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0816 22:45:08.669169       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0816 22:45:09.668870       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0816 22:45:09.669877       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	
	* 
	* ==> kube-controller-manager [1a95abd0c58fec2e599b7fce2863703efdb6bdba4e9f4ead191033cb41d2b50d] <==
	* E0816 22:42:53.642035       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:53.642919       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"4b10e5a6-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"422", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:53.663073       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"4b1ad65f-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"423", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:42:53.687785       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:53.688209       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"4b10e5a6-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"422", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:42:53.693913       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:42:53.735371       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:53.735927       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"4b1ad65f-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:42:53.737937       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:53.738275       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"4b10e5a6-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"422", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:42:53.775198       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:53.775342       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"4b10e5a6-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"422", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:42:53.782439       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:53.782566       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"4b10e5a6-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"422", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:42:53.791273       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:53.791703       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"4b1ad65f-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:42:53.831190       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:53.831576       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"4b1ad65f-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:42:53.839990       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:53.840317       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"4b1ad65f-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:54.005057       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"4b10e5a6-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-5b494cc544-nznrc
	I0816 22:42:54.213180       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-8546d8b77b", UID:"4a9254f7-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-8546d8b77b-vpvp5
	I0816 22:42:54.883479       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"4b1ad65f-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5d8978d65d-mblkl
	E0816 22:43:20.097686       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:43:22.664721       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [c56f64c3fe77c258bef4bbf05b666c6e0f0b07c60a576cf8f489af832a850b70] <==
	* W0816 22:42:51.084922       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0816 22:42:51.097742       1 server_others.go:148] Using iptables Proxier.
	I0816 22:42:51.098735       1 server_others.go:178] Tearing down inactive rules.
	E0816 22:42:51.210478       1 proxier.go:583] Error removing iptables rules in ipvs proxier: error deleting chain "KUBE-MARK-MASQ": exit status 1: iptables: Too many links.
	I0816 22:42:51.300207       1 server.go:555] Version: v1.14.0
	I0816 22:42:51.322439       1 config.go:202] Starting service config controller
	I0816 22:42:51.322700       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0816 22:42:51.324302       1 config.go:102] Starting endpoints config controller
	I0816 22:42:51.324529       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0816 22:42:51.422986       1 controller_utils.go:1034] Caches are synced for service config controller
	I0816 22:42:51.425124       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	
	* 
	* ==> kube-scheduler [1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d] <==
	* W0816 22:42:24.914153       1 authentication.go:55] Authentication is disabled
	I0816 22:42:24.914179       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
	I0816 22:42:24.916859       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	E0816 22:42:29.482445       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:42:29.485780       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:42:29.486161       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:42:29.486661       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:42:29.486989       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:42:29.493926       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:42:29.494087       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:42:29.495550       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:42:29.497002       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:42:29.502073       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:42:30.485713       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:42:30.488902       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:42:30.494802       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:42:30.496082       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:42:30.499909       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:42:30.500544       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:42:30.502068       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:42:30.502973       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:42:30.504270       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:42:30.506877       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0816 22:42:32.321137       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0816 22:42:32.421433       1 controller_utils.go:1034] Caches are synced for scheduler controller
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:36:28 UTC, end at Mon 2021-08-16 22:45:10 UTC. --
	Aug 16 22:42:54 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:42:54.823340    6216 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:42:54 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:42:54.823394    6216 pod_workers.go:190] Error syncing pod 4b7525e3-fee3-11eb-bea8-525400bf2371 ("metrics-server-8546d8b77b-vpvp5_kube-system(4b7525e3-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Aug 16 22:42:54 old-k8s-version-20210816223154-6986 kubelet[6216]: I0816 22:42:54.986126    6216 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/4bdd50b5-fee3-11eb-bea8-525400bf2371-tmp-volume") pod "kubernetes-dashboard-5d8978d65d-mblkl" (UID: "4bdd50b5-fee3-11eb-bea8-525400bf2371")
	Aug 16 22:42:54 old-k8s-version-20210816223154-6986 kubelet[6216]: I0816 22:42:54.986343    6216 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-dllwm" (UniqueName: "kubernetes.io/secret/4bdd50b5-fee3-11eb-bea8-525400bf2371-kubernetes-dashboard-token-dllwm") pod "kubernetes-dashboard-5d8978d65d-mblkl" (UID: "4bdd50b5-fee3-11eb-bea8-525400bf2371")
	Aug 16 22:42:55 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:42:55.071945    6216 pod_workers.go:190] Error syncing pod 4b7525e3-fee3-11eb-bea8-525400bf2371 ("metrics-server-8546d8b77b-vpvp5_kube-system(4b7525e3-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:43:03 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:03.328438    6216 pod_workers.go:190] Error syncing pod 4b4f43e1-fee3-11eb-bea8-525400bf2371 ("dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"
	Aug 16 22:43:04 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:04.313833    6216 pod_workers.go:190] Error syncing pod 4b4f43e1-fee3-11eb-bea8-525400bf2371 ("dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"
	Aug 16 22:43:05 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:05.317443    6216 pod_workers.go:190] Error syncing pod 4b4f43e1-fee3-11eb-bea8-525400bf2371 ("dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"
	Aug 16 22:43:06 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:06.777480    6216 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:43:06 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:06.777711    6216 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:43:06 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:06.777911    6216 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:43:06 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:06.777960    6216 pod_workers.go:190] Error syncing pod 4b7525e3-fee3-11eb-bea8-525400bf2371 ("metrics-server-8546d8b77b-vpvp5_kube-system(4b7525e3-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Aug 16 22:43:18 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:18.362406    6216 pod_workers.go:190] Error syncing pod 4b4f43e1-fee3-11eb-bea8-525400bf2371 ("dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"
	Aug 16 22:43:18 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:18.767919    6216 pod_workers.go:190] Error syncing pod 4b7525e3-fee3-11eb-bea8-525400bf2371 ("metrics-server-8546d8b77b-vpvp5_kube-system(4b7525e3-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:43:24 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:24.338079    6216 pod_workers.go:190] Error syncing pod 4b4f43e1-fee3-11eb-bea8-525400bf2371 ("dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"
	Aug 16 22:43:30 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:30.780007    6216 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:43:30 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:30.781232    6216 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:43:30 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:30.781840    6216 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:43:30 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:30.782016    6216 pod_workers.go:190] Error syncing pod 4b7525e3-fee3-11eb-bea8-525400bf2371 ("metrics-server-8546d8b77b-vpvp5_kube-system(4b7525e3-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Aug 16 22:43:40 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:40.445275    6216 pod_workers.go:190] Error syncing pod 4b4f43e1-fee3-11eb-bea8-525400bf2371 ("dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"
	Aug 16 22:43:43 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:43.774125    6216 pod_workers.go:190] Error syncing pod 4b7525e3-fee3-11eb-bea8-525400bf2371 ("metrics-server-8546d8b77b-vpvp5_kube-system(4b7525e3-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:43:44 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:44.338559    6216 pod_workers.go:190] Error syncing pod 4b4f43e1-fee3-11eb-bea8-525400bf2371 ("dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"
	Aug 16 22:43:48 old-k8s-version-20210816223154-6986 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:43:48 old-k8s-version-20210816223154-6986 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:43:48 old-k8s-version-20210816223154-6986 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [e30d5daacc87f690bc273d129f168487c51648dd2e6bc8d0870a0848215e02e9] <==
	* 2021/08/16 22:42:56 Using namespace: kubernetes-dashboard
	2021/08/16 22:42:56 Using in-cluster config to connect to apiserver
	2021/08/16 22:42:56 Using secret token for csrf signing
	2021/08/16 22:42:56 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/16 22:42:56 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/16 22:42:56 Successful initial request to the apiserver, version: v1.14.0
	2021/08/16 22:42:56 Generating JWE encryption key
	2021/08/16 22:42:56 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/16 22:42:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/16 22:42:56 Initializing JWE encryption key from synchronized object
	2021/08/16 22:42:56 Creating in-cluster Sidecar client
	2021/08/16 22:42:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:42:56 Serving insecurely on HTTP port: 9090
	2021/08/16 22:43:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:42:56 Starting overwatch
	
	* 
	* ==> storage-provisioner [7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 92 [sync.Cond.Wait, 1 minutes]:
	sync.runtime_notifyListWait(0xc0000d2350, 0xc000000003)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc0000d2340)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0000945a0, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0003b2c80, 0x18e5530, 0xc00042a7c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0003fde40)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0003fde40, 0x18b3d60, 0xc0004264e0, 0x17a0e01, 0xc000102300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0003fde40, 0x3b9aca00, 0x0, 0xc000095801, 0xc000102300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0003fde40, 0x3b9aca00, 0xc000102300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	

-- /stdout --
** stderr ** 
	E0816 22:45:05.913320   20420 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: "\n** stderr ** \nError from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210816223154-6986 -n old-k8s-version-20210816223154-6986
E0816 22:45:14.984629    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816223156-6986/client.crt: no such file or directory
E0816 22:45:25.225026    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816223156-6986/client.crt: no such file or directory
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210816223154-6986 -n old-k8s-version-20210816223154-6986: exit status 2 (17.266978439s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	E0816 22:45:27.666677   21038 status.go:422] Error apiserver status: https://192.168.94.246:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	healthz check failed
	

** /stderr **
helpers_test.go:240: status error: exit status 2 (may be ok)
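
The 500 above is the apiserver's aggregated /healthz report: every sub-check passes except [-]etcd, and a single failing check fails the whole endpoint, which is why status sees the host as degraded (exit status 2). A minimal sketch of probing the same endpoint from Go, for illustration only: the address is the one in the log, InsecureSkipVerify stands in for the cluster's real client certificates, and anonymous access to /healthz is assumed (kubeadm-style clusters usually grant it):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// INSECURE: skips certificate verification; illustration only.
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.94.246:8443/healthz?verbose")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode) // 500 while any check (here etcd) is failing
		fmt.Print(string(body))      // the per-check [+]/[-] lines seen in the log
	}
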
helpers_test.go:245: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20210816223154-6986 logs -n 25

=== CONT  TestStartStop/group/old-k8s-version/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p old-k8s-version-20210816223154-6986 logs -n 25: exit status 110 (1m7.735650076s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable metrics-server -p                          | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:44 UTC | Mon, 16 Aug 2021 22:34:45 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816223156-6986                 | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:31:56 UTC | Mon, 16 Aug 2021 22:35:04 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:33:33 UTC | Mon, 16 Aug 2021 22:35:08 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:16 UTC | Mon, 16 Aug 2021 22:35:17 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:19 UTC | Mon, 16 Aug 2021 22:35:20 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:18 UTC | Mon, 16 Aug 2021 22:35:42 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:51 UTC | Mon, 16 Aug 2021 22:35:52 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:45 UTC | Mon, 16 Aug 2021 22:36:18 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:18 UTC | Mon, 16 Aug 2021 22:36:18 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:17 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:20 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:52 UTC | Mon, 16 Aug 2021 22:37:25 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:37:25 UTC | Mon, 16 Aug 2021 22:37:25 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816223156-6986                 | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:42:31 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:42:42 UTC | Mon, 16 Aug 2021 22:42:42 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:18 UTC | Mon, 16 Aug 2021 22:43:37 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:43:44 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:43:47 UTC | Mon, 16 Aug 2021 22:43:47 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:37:25 UTC | Mon, 16 Aug 2021 22:43:52 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:43:55 UTC | Mon, 16 Aug 2021 22:43:55 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:44:03 UTC | Mon, 16 Aug 2021 22:44:03 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:44:30 UTC | Mon, 16 Aug 2021 22:44:30 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:44:31 UTC | Mon, 16 Aug 2021 22:44:31 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:44:31
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:44:31.336463   20709 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:44:31.336533   20709 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:44:31.336537   20709 out.go:311] Setting ErrFile to fd 2...
	I0816 22:44:31.336542   20709 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:44:31.336660   20709 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:44:31.336912   20709 out.go:305] Setting JSON to false
	I0816 22:44:31.372871   20709 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":5233,"bootTime":1629148638,"procs":183,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:44:31.372979   20709 start.go:121] virtualization: kvm guest
	I0816 22:44:31.375339   20709 out.go:177] * [newest-cni-20210816224431-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:44:31.376976   20709 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:44:31.375475   20709 notify.go:169] Checking for updates...
	I0816 22:44:31.378360   20709 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:44:31.379751   20709 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:44:31.381087   20709 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:44:31.381541   20709 config.go:177] Loaded profile config "default-k8s-different-port-20210816223418-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:44:31.381666   20709 config.go:177] Loaded profile config "embed-certs-20210816223333-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:44:31.381762   20709 config.go:177] Loaded profile config "old-k8s-version-20210816223154-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0816 22:44:31.381800   20709 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:44:31.415103   20709 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 22:44:31.415129   20709 start.go:278] selected driver: kvm2
	I0816 22:44:31.415136   20709 start.go:751] validating driver "kvm2" against <nil>
	I0816 22:44:31.415156   20709 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 22:44:31.417123   20709 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:44:31.417269   20709 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 22:44:31.428378   20709 install.go:137] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2 version is 1.22.0
	I0816 22:44:31.428425   20709 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	W0816 22:44:31.428448   20709 out.go:242] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0816 22:44:31.428583   20709 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 22:44:31.428608   20709 cni.go:93] Creating CNI manager for ""
	I0816 22:44:31.428616   20709 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:44:31.428624   20709 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 22:44:31.428632   20709 start_flags.go:277] config:
	{Name:newest-cni-20210816224431-6986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816224431-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:44:31.428727   20709 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:44:31.430575   20709 out.go:177] * Starting control plane node newest-cni-20210816224431-6986 in cluster newest-cni-20210816224431-6986
	I0816 22:44:31.430597   20709 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 22:44:31.430639   20709 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0816 22:44:31.430657   20709 cache.go:56] Caching tarball of preloaded images
	I0816 22:44:31.430757   20709 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 22:44:31.430778   20709 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on containerd
	I0816 22:44:31.430895   20709 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/config.json ...
	I0816 22:44:31.430918   20709 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/config.json: {Name:mkc9663018589074668a46d91251fc73622d0917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:44:31.431076   20709 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:44:31.431108   20709 start.go:313] acquiring machines lock for newest-cni-20210816224431-6986: {Name:mk808edd60d1305a42bb85791729eff4573dbb15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 22:44:31.431156   20709 start.go:317] acquired machines lock for "newest-cni-20210816224431-6986" in 32.129µs
	I0816 22:44:31.431179   20709 start.go:89] Provisioning new machine with config: &{Name:newest-cni-20210816224431-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816224431-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0816 22:44:31.431268   20709 start.go:126] createHost starting for "" (driver="kvm2")
	I0816 22:44:31.433330   20709 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 22:44:31.433460   20709 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:44:31.433512   20709 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:44:31.443654   20709 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46495
	I0816 22:44:31.444063   20709 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:44:31.444565   20709 main.go:130] libmachine: Using API Version  1
	I0816 22:44:31.444586   20709 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:44:31.444925   20709 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:44:31.445103   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetMachineName
	I0816 22:44:31.445239   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:44:31.445356   20709 start.go:160] libmachine.API.Create for "newest-cni-20210816224431-6986" (driver="kvm2")
	I0816 22:44:31.445394   20709 client.go:168] LocalClient.Create starting
	I0816 22:44:31.445428   20709 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem
	I0816 22:44:31.445462   20709 main.go:130] libmachine: Decoding PEM data...
	I0816 22:44:31.445480   20709 main.go:130] libmachine: Parsing certificate...
	I0816 22:44:31.445628   20709 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem
	I0816 22:44:31.445657   20709 main.go:130] libmachine: Decoding PEM data...
	I0816 22:44:31.445682   20709 main.go:130] libmachine: Parsing certificate...
	I0816 22:44:31.445747   20709 main.go:130] libmachine: Running pre-create checks...
	I0816 22:44:31.445765   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .PreCreateCheck
	I0816 22:44:31.446091   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetConfigRaw
	I0816 22:44:31.446492   20709 main.go:130] libmachine: Creating machine...
	I0816 22:44:31.446507   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Create
	I0816 22:44:31.446664   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Creating KVM machine...
	I0816 22:44:31.449397   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found existing default KVM network
	I0816 22:44:31.450933   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.450787   20733 network.go:240] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b4:06:d9}}
	I0816 22:44:31.452400   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.452341   20733 network.go:240] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f8:b7:da}}
	I0816 22:44:31.453464   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.453369   20733 network.go:240] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ee:53:1e}}
	I0816 22:44:31.454534   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.454466   20733 network.go:240] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:45:d3:67}}
	I0816 22:44:31.456524   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.456452   20733 network.go:240] skipping subnet 192.168.83.0/24 that is taken: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 Interface:{IfaceName:virbr5 IfaceIPv4:192.168.83.1 IfaceMTU:1500 IfaceMAC:52:54:00:ea:76:4e}}
	I0816 22:44:31.457759   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.457675   20733 network.go:240] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 Interface:{IfaceName:virbr6 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:52:54:00:6c:86:bd}}
	I0816 22:44:31.458795   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.458728   20733 network.go:240] skipping subnet 192.168.105.0/24 that is taken: &{IP:192.168.105.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.105.0/24 Gateway:192.168.105.1 ClientMin:192.168.105.2 ClientMax:192.168.105.254 Broadcast:192.168.105.255 Interface:{IfaceName:virbr7 IfaceIPv4:192.168.105.1 IfaceMTU:1500 IfaceMAC:52:54:00:ea:b2:03}}
	I0816 22:44:31.460187   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.460086   20733 network.go:288] reserving subnet 192.168.116.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.116.0:0xc0000be0b8] misses:0}
	I0816 22:44:31.460215   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.460127   20733 network.go:235] using free private subnet 192.168.116.0/24: &{IP:192.168.116.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.116.0/24 Gateway:192.168.116.1 ClientMin:192.168.116.2 ClientMax:192.168.116.254 Broadcast:192.168.116.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0816 22:44:31.497376   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | trying to create private KVM network mk-newest-cni-20210816224431-6986 192.168.116.0/24...
	I0816 22:44:31.783525   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | private KVM network mk-newest-cni-20210816224431-6986 192.168.116.0/24 created
	I0816 22:44:31.783566   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986 ...
	I0816 22:44:31.783588   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.783470   20733 common.go:101] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:44:31.783620   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0816 22:44:31.783744   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso...
	I0816 22:44:31.986209   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.986090   20733 common.go:108] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa...
	I0816 22:44:32.210064   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:32.209964   20733 common.go:114] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/newest-cni-20210816224431-6986.rawdisk...
	I0816 22:44:32.210106   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Writing magic tar header
	I0816 22:44:32.210184   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Writing SSH key tar header
	I0816 22:44:32.210290   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:32.210235   20733 common.go:128] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986 ...
	I0816 22:44:32.210373   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986
	I0816 22:44:32.210394   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines
	I0816 22:44:32.210410   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986 (perms=drwx------)
	I0816 22:44:32.210437   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:44:32.210461   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a
	I0816 22:44:32.210482   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines (perms=drwxr-xr-x)
	I0816 22:44:32.210497   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 22:44:32.210520   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube (perms=drwxr-xr-x)
	I0816 22:44:32.210544   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a (perms=drwxr-xr-x)
	I0816 22:44:32.210559   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0816 22:44:32.210573   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Checking permissions on dir: /home/jenkins
	I0816 22:44:32.210588   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Checking permissions on dir: /home
	I0816 22:44:32.210601   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Skipping /home - not owner
	I0816 22:44:32.210622   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 22:44:32.210643   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Creating domain...
	I0816 22:44:32.236885   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:cb:20:33 in network default
	I0816 22:44:32.237605   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Ensuring networks are active...
	I0816 22:44:32.237633   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:32.239810   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Ensuring network default is active
	I0816 22:44:32.240283   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Ensuring network mk-newest-cni-20210816224431-6986 is active
	I0816 22:44:32.240922   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Getting domain xml...
	I0816 22:44:32.242965   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Creating domain...
	I0816 22:44:32.738898   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Waiting to get IP...
	I0816 22:44:32.739904   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:32.740448   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:32.740502   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:32.740433   20733 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0816 22:44:33.004929   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:33.005411   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:33.005575   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:33.005505   20733 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0816 22:44:33.387930   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:33.388329   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:33.388355   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:33.388297   20733 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0816 22:44:33.812967   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:33.813440   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:33.813476   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:33.813379   20733 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0816 22:44:34.287851   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:34.288312   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:34.288339   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:34.288266   20733 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0816 22:44:34.876901   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:34.877366   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:34.877411   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:34.877323   20733 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0816 22:44:35.713609   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:35.714113   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:35.714144   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:35.714065   20733 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0816 22:44:36.462453   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:36.463031   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:36.463062   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:36.462902   20733 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0816 22:44:37.451848   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:37.452318   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:37.452342   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:37.452285   20733 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0816 22:44:38.643196   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:38.643649   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:38.643677   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:38.643613   20733 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0816 22:44:40.323071   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:40.323607   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:40.323641   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:40.323541   20733 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0816 22:44:42.671108   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:42.671597   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:42.671621   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:42.671578   20733 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0816 22:44:46.039560   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:46.040060   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:46.040083   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:46.040019   20733 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
	I0816 22:44:49.159411   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:49.160097   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Found IP for machine: 192.168.116.132
	I0816 22:44:49.160124   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Reserving static IP address...
	I0816 22:44:49.160140   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has current primary IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:49.160491   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find host DHCP lease matching {name: "newest-cni-20210816224431-6986", mac: "52:54:00:50:9a:fc", ip: "192.168.116.132"} in network mk-newest-cni-20210816224431-6986
	I0816 22:44:49.209047   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Getting to WaitForSSH function...
	I0816 22:44:49.209087   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Reserved static IP address: 192.168.116.132
	I0816 22:44:49.209101   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Waiting for SSH to be available...
	I0816 22:44:49.214159   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:49.214565   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:44:46 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:minikube Clientid:01:52:54:00:50:9a:fc}
	I0816 22:44:49.214601   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:49.214728   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Using SSH client type: external
	I0816 22:44:49.214773   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa (-rw-------)
	I0816 22:44:49.214812   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.116.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:44:49.214861   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | About to run SSH command:
	I0816 22:44:49.214873   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | exit 0
	I0816 22:44:49.350971   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | SSH cmd err, output: <nil>: 
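The probe that just completed shells out to /usr/bin/ssh with non-interactive, host-key-agnostic options and runs `exit 0` until the guest accepts the connection. A hypothetical helper illustrating that external-client check, assuming the same flags as the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// sshReady returns nil once `ssh user@ip exit 0` succeeds with the
	// hardened options shown in the DBG lines above.
	func sshReady(user, ip, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			fmt.Sprintf("%s@%s", user, ip),
			"exit 0",
		}
		return exec.Command("/usr/bin/ssh", args...).Run()
	}

	func main() {
		if err := sshReady("docker", "192.168.116.132", "/path/to/id_rsa"); err != nil {
			fmt.Println("ssh not ready:", err)
		}
	}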
	I0816 22:44:49.351459   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) KVM machine creation complete!
	I0816 22:44:49.351538   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetConfigRaw
	I0816 22:44:49.352069   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:44:49.352234   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:44:49.352393   20709 main.go:130] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0816 22:44:49.352406   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetState
	I0816 22:44:49.355162   20709 main.go:130] libmachine: Detecting operating system of created instance...
	I0816 22:44:49.355184   20709 main.go:130] libmachine: Waiting for SSH to be available...
	I0816 22:44:49.355192   20709 main.go:130] libmachine: Getting to WaitForSSH function...
	I0816 22:44:49.355202   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:44:49.360108   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:49.360445   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:44:46 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:44:49.360472   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:49.360602   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:44:49.360728   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:44:49.360855   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:44:49.360956   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:44:49.361069   20709 main.go:130] libmachine: Using SSH client type: native
	I0816 22:44:49.361266   20709 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.116.132 22 <nil> <nil>}
	I0816 22:44:49.361279   20709 main.go:130] libmachine: About to run SSH command:
	exit 0
	I0816 22:44:49.482395   20709 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:44:49.482425   20709 main.go:130] libmachine: Detecting the provisioner...
	I0816 22:44:49.482437   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:44:49.487991   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:49.488306   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:44:46 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:44:49.488361   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:49.488472   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:44:49.488654   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:44:49.488829   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:44:49.488969   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:44:49.489086   20709 main.go:130] libmachine: Using SSH client type: native
	I0816 22:44:49.489235   20709 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.116.132 22 <nil> <nil>}
	I0816 22:44:49.489258   20709 main.go:130] libmachine: About to run SSH command:
	cat /etc/os-release
	I0816 22:44:49.621380   20709 main.go:130] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2020.02.12
	ID=buildroot
	VERSION_ID=2020.02.12
	PRETTY_NAME="Buildroot 2020.02.12"
	
	I0816 22:44:49.621441   20709 main.go:130] libmachine: found compatible host: buildroot
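The provisioner is picked by reading the ID= field out of the /etc/os-release output shown above. A small illustrative parser (simplified; the real detection lives in libmachine's provisioner registry):

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// detectProvisioner extracts the ID= value from os-release text,
	// stripping optional surrounding quotes.
	func detectProvisioner(osRelease string) string {
		sc := bufio.NewScanner(strings.NewReader(osRelease))
		for sc.Scan() {
			line := sc.Text()
			if strings.HasPrefix(line, "ID=") {
				return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
			}
		}
		return ""
	}

	func main() {
		out := "NAME=Buildroot\nVERSION=2020.02.12\nID=buildroot\n"
		fmt.Println(detectProvisioner(out)) // buildroot
	}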
	I0816 22:44:49.621455   20709 main.go:130] libmachine: Provisioning with buildroot...
	I0816 22:44:49.621463   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetMachineName
	I0816 22:44:49.621740   20709 buildroot.go:166] provisioning hostname "newest-cni-20210816224431-6986"
	I0816 22:44:49.621766   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetMachineName
	I0816 22:44:49.621966   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:44:49.627415   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:49.627751   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:44:46 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:44:49.627781   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:49.627904   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:44:49.628091   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:44:49.628275   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:44:49.628433   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:44:49.628601   20709 main.go:130] libmachine: Using SSH client type: native
	I0816 22:44:49.628768   20709 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.116.132 22 <nil> <nil>}
	I0816 22:44:49.628786   20709 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210816224431-6986 && echo "newest-cni-20210816224431-6986" | sudo tee /etc/hostname
	I0816 22:44:49.763579   20709 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210816224431-6986
	
	I0816 22:44:49.763615   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:44:49.768883   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:49.769263   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:44:46 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:44:49.769295   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:49.769482   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:44:49.769666   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:44:49.769819   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:44:49.769968   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:44:49.770107   20709 main.go:130] libmachine: Using SSH client type: native
	I0816 22:44:49.770257   20709 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.116.132 22 <nil> <nil>}
	I0816 22:44:49.770281   20709 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210816224431-6986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210816224431-6986/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210816224431-6986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:44:49.898158   20709 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:44:49.898190   20709 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:44:49.898213   20709 buildroot.go:174] setting up certificates
	I0816 22:44:49.898237   20709 provision.go:83] configureAuth start
	I0816 22:44:49.898255   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetMachineName
	I0816 22:44:49.898537   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetIP
	I0816 22:44:49.903610   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:49.903950   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:44:46 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:44:49.903987   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:49.904098   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:44:49.908392   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:49.908699   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:44:46 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:44:49.908726   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:49.908796   20709 provision.go:138] copyHostCerts
	I0816 22:44:49.908862   20709 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:44:49.908875   20709 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:44:49.908945   20709 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:44:49.909053   20709 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:44:49.909066   20709 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:44:49.909102   20709 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:44:49.909177   20709 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:44:49.909188   20709 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:44:49.909213   20709 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
	I0816 22:44:49.909273   20709 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210816224431-6986 san=[192.168.116.132 192.168.116.132 localhost 127.0.0.1 minikube newest-cni-20210816224431-6986]
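That log line records issuing a CA-signed server certificate whose SANs cover the VM's IP plus the localhost and cluster names. A self-contained sketch of the same idea with Go's crypto/x509 (errors elided for brevity; the stand-in CA is generated inline here, whereas minikube loads ca.pem/ca-key.pem from the paths shown):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Stand-in CA (minikube reuses an existing one).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the SANs from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-20210816224431-6986"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("192.168.116.132"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-20210816224431-6986"},
		}
		der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}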
	I0816 22:44:50.072488   20709 provision.go:172] copyRemoteCerts
	I0816 22:44:50.072561   20709 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:44:50.072594   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:44:50.077464   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:50.077848   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:44:46 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:44:50.077885   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:50.078033   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:44:50.078211   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:44:50.078362   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:44:50.078486   20709 sshutil.go:53] new ssh client: &{IP:192.168.116.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa Username:docker}
	I0816 22:44:50.166099   20709 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:44:50.184195   20709 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0816 22:44:50.200317   20709 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:44:50.218506   20709 provision.go:86] duration metric: configureAuth took 320.256048ms
	I0816 22:44:50.218523   20709 buildroot.go:189] setting minikube options for container-runtime
	I0816 22:44:50.218699   20709 config.go:177] Loaded profile config "newest-cni-20210816224431-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0816 22:44:50.218717   20709 main.go:130] libmachine: Checking connection to Docker...
	I0816 22:44:50.218735   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetURL
	I0816 22:44:50.221388   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Using libvirt version 3000000
	I0816 22:44:50.226065   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:50.226385   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:44:46 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:44:50.226419   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:50.226546   20709 main.go:130] libmachine: Docker is up and running!
	I0816 22:44:50.226562   20709 main.go:130] libmachine: Reticulating splines...
	I0816 22:44:50.226568   20709 client.go:171] LocalClient.Create took 18.781164659s
	I0816 22:44:50.226586   20709 start.go:168] duration metric: libmachine.API.Create for "newest-cni-20210816224431-6986" took 18.781230466s
	I0816 22:44:50.226594   20709 start.go:267] post-start starting for "newest-cni-20210816224431-6986" (driver="kvm2")
	I0816 22:44:50.226603   20709 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:44:50.226621   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:44:50.226831   20709 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:44:50.226863   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:44:50.231033   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:50.231325   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:44:46 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:44:50.231353   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:50.231447   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:44:50.231598   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:44:50.231740   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:44:50.231853   20709 sshutil.go:53] new ssh client: &{IP:192.168.116.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa Username:docker}
	I0816 22:44:50.322201   20709 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:44:50.327507   20709 info.go:137] Remote host: Buildroot 2020.02.12
	I0816 22:44:50.327536   20709 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:44:50.327595   20709 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:44:50.327684   20709 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0816 22:44:50.327788   20709 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:44:50.335183   20709 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:44:50.352931   20709 start.go:270] post-start completed in 126.325239ms
	I0816 22:44:50.352983   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetConfigRaw
	I0816 22:44:50.353547   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetIP
	I0816 22:44:50.359168   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:50.359521   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:44:46 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:44:50.359554   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:50.359772   20709 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/config.json ...
	I0816 22:44:50.359951   20709 start.go:129] duration metric: createHost completed in 18.928670605s
	I0816 22:44:50.359966   20709 start.go:80] releasing machines lock for "newest-cni-20210816224431-6986", held for 18.928799978s
	I0816 22:44:50.360002   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:44:50.360187   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetIP
	I0816 22:44:50.365248   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:50.365532   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:44:46 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:44:50.365559   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:50.365728   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:44:50.365896   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:44:50.366361   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:44:50.366566   20709 ssh_runner.go:149] Run: systemctl --version
	I0816 22:44:50.366590   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:44:50.366613   20709 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:44:50.366656   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:44:50.373088   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:50.373114   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:50.373541   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:44:46 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:44:50.373572   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:44:46 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:44:50.373594   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:50.373678   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:50.373794   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:44:50.373908   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:44:50.375577   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:44:50.375577   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:44:50.375734   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:44:50.375743   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:44:50.375914   20709 sshutil.go:53] new ssh client: &{IP:192.168.116.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa Username:docker}
	I0816 22:44:50.375914   20709 sshutil.go:53] new ssh client: &{IP:192.168.116.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa Username:docker}
	I0816 22:44:50.469903   20709 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 22:44:50.470021   20709 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:44:54.476687   20709 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.006638379s)
	I0816 22:44:54.476809   20709 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
	I0816 22:44:54.476872   20709 ssh_runner.go:149] Run: which lz4
	I0816 22:44:54.481334   20709 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 22:44:54.485769   20709 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0816 22:44:54.485808   20709 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (945588089 bytes)
	I0816 22:44:58.413775   20709 containerd.go:546] Took 3.932473 seconds to copy over tarball
	I0816 22:44:58.413862   20709 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 22:45:07.691057   20709 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (9.277161186s)
	I0816 22:45:07.691098   20709 containerd.go:553] Took 9.277286 seconds to extract the tarball
	I0816 22:45:07.691111   20709 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0816 22:45:07.770581   20709 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:45:07.915586   20709 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:45:07.965213   20709 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0816 22:45:08.013765   20709 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0816 22:45:08.032621   20709 docker.go:153] disabling docker service ...
	I0816 22:45:08.032713   20709 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:45:08.045376   20709 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:45:08.055881   20709 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:45:08.194537   20709 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:45:08.331450   20709 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:45:08.345687   20709 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:45:08.364434   20709 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
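The long argument above is the containerd config.toml, base64-encoded so it survives the shell round-trip; its first decoded lines are root = "/var/lib/containerd", state = "/run/containerd", oom_score = 0. A tiny decoder for inspecting such blobs offline (a hypothetical helper, equivalent to piping the blob through `base64 -d`):

	package main

	import (
		"encoding/base64"
		"io"
		"os"
	)

	// Reads a base64 stream on stdin and writes the decoded bytes to
	// stdout, e.g. `go run decode.go < blob.b64 > config.toml`.
	func main() {
		dec := base64.NewDecoder(base64.StdEncoding, os.Stdin)
		if _, err := io.Copy(os.Stdout, dec); err != nil {
			os.Exit(1)
		}
	}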
	I0816 22:45:08.379502   20709 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:45:08.389390   20709 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:45:08.389442   20709 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:45:08.405833   20709 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 22:45:08.412136   20709 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:45:08.534426   20709 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:45:09.834398   20709 ssh_runner.go:189] Completed: sudo systemctl restart containerd: (1.299933572s)
	I0816 22:45:09.834433   20709 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0816 22:45:09.834484   20709 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:45:09.839966   20709 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0816 22:45:10.945235   20709 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:45:10.950881   20709 start.go:413] Will wait 60s for crictl version
	I0816 22:45:10.950956   20709 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:45:10.984742   20709 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0816 22:45:10.984805   20709 ssh_runner.go:149] Run: containerd --version
	I0816 22:45:11.021588   20709 ssh_runner.go:149] Run: containerd --version
	I0816 22:45:11.053103   20709 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on containerd 1.4.9 ...
	I0816 22:45:11.053170   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetIP
	I0816 22:45:11.058835   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:45:11.059249   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:44:46 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:45:11.059280   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:45:11.059416   20709 ssh_runner.go:149] Run: grep 192.168.116.1	host.minikube.internal$ /etc/hosts
	I0816 22:45:11.064282   20709 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.116.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:45:11.077251   20709 out.go:177]   - kubelet.network-plugin=cni
	I0816 22:45:11.078744   20709 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0816 22:45:11.078822   20709 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 22:45:11.078912   20709 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:45:11.110740   20709 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:45:11.110759   20709 containerd.go:517] Images already preloaded, skipping extraction
	I0816 22:45:11.110804   20709 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:45:11.143881   20709 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:45:11.143899   20709 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:45:11.143956   20709 ssh_runner.go:149] Run: sudo crictl info
	I0816 22:45:11.177624   20709 cni.go:93] Creating CNI manager for ""
	I0816 22:45:11.177648   20709 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:45:11.177657   20709 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0816 22:45:11.177672   20709 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.116.132 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210816224431-6986 NodeName:newest-cni-20210816224431-6986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.116.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.116.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:45:11.177833   20709 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.116.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20210816224431-6986"
	  kubeletExtraArgs:
	    node-ip: 192.168.116.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.116.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 22:45:11.177953   20709 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210816224431-6986 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.116.132 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816224431-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0816 22:45:11.178018   20709 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0816 22:45:11.186301   20709 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:45:11.186357   20709 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:45:11.193679   20709 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (589 bytes)
	I0816 22:45:11.206138   20709 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0816 22:45:11.218205   20709 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
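"scp memory" in the three lines above means the payload never touches the local disk: an in-memory buffer is streamed straight to the remote path over the SSH session. A rough equivalent with a plain ssh child process (a hypothetical helper; the real code goes through its ssh_runner abstraction):

	package main

	import (
		"bytes"
		"os/exec"
	)

	// writeRemote pipes data into `sudo tee <path>` on the guest, so the
	// bytes go from memory to the remote file without a local temp file.
	func writeRemote(host, path string, data []byte) error {
		cmd := exec.Command("ssh", host, "sudo tee "+path+" >/dev/null")
		cmd.Stdin = bytes.NewReader(data)
		return cmd.Run()
	}

	func main() {
		_ = writeRemote("docker@192.168.116.132", "/tmp/example.conf", []byte("hello\n"))
	}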
	I0816 22:45:11.230219   20709 ssh_runner.go:149] Run: grep 192.168.116.132	control-plane.minikube.internal$ /etc/hosts
	I0816 22:45:11.234387   20709 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.116.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:45:11.245062   20709 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986 for IP: 192.168.116.132
	I0816 22:45:11.245116   20709 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:45:11.245135   20709 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:45:11.245191   20709 certs.go:297] generating minikube-user signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/client.key
	I0816 22:45:11.245206   20709 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/client.crt with IP's: []
	I0816 22:45:11.410376   20709 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/client.crt ...
	I0816 22:45:11.410407   20709 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/client.crt: {Name:mk2939b11332bb1d1f2e6f43d9e17f461dfc7cf7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:45:11.410618   20709 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/client.key ...
	I0816 22:45:11.410638   20709 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/client.key: {Name:mkd90dad775d897ce757fa0a52fcb3ce72fcd6a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:45:11.410732   20709 certs.go:297] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/apiserver.key.c979f591
	I0816 22:45:11.410745   20709 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/apiserver.crt.c979f591 with IP's: [192.168.116.132 10.96.0.1 127.0.0.1 10.0.0.1]
	I0816 22:45:11.778487   20709 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/apiserver.crt.c979f591 ...
	I0816 22:45:11.778522   20709 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/apiserver.crt.c979f591: {Name:mk88be2af8363ba8f09a279b58d1470defa72b8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:45:11.778718   20709 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/apiserver.key.c979f591 ...
	I0816 22:45:11.778736   20709 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/apiserver.key.c979f591: {Name:mk5bbe929e454888a209aa0365f1f4aa073bc610 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:45:11.778842   20709 certs.go:308] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/apiserver.crt.c979f591 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/apiserver.crt
	I0816 22:45:11.778918   20709 certs.go:312] copying /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/apiserver.key.c979f591 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/apiserver.key
	I0816 22:45:11.779008   20709 certs.go:297] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/proxy-client.key
	I0816 22:45:11.779020   20709 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/proxy-client.crt with IP's: []
	I0816 22:45:11.842842   20709 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/proxy-client.crt ...
	I0816 22:45:11.842868   20709 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/proxy-client.crt: {Name:mk7fd2bf64b258b3f862351984ba70abca58445a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:45:11.843031   20709 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/proxy-client.key ...
	I0816 22:45:11.843043   20709 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/proxy-client.key: {Name:mk7148bf15efb63f027b56dab738851ffba9fc3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:45:11.843198   20709 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem (1338 bytes)
	W0816 22:45:11.843239   20709 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
	I0816 22:45:11.843261   20709 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 22:45:11.843286   20709 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:45:11.843308   20709 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:45:11.843331   20709 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1675 bytes)
	I0816 22:45:11.843372   20709 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:45:11.844266   20709 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:45:11.863172   20709 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 22:45:11.880289   20709 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:45:11.897347   20709 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 22:45:11.914003   20709 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:45:11.930907   20709 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 22:45:11.948766   20709 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:45:11.966290   20709 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 22:45:11.983732   20709 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
	I0816 22:45:12.000882   20709 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
	I0816 22:45:12.018112   20709 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:45:12.034146   20709 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:45:12.045964   20709 ssh_runner.go:149] Run: openssl version
	I0816 22:45:12.051936   20709 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
	I0816 22:45:12.060176   20709 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6986.pem
	I0816 22:45:12.064805   20709 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:49 /usr/share/ca-certificates/6986.pem
	I0816 22:45:12.064866   20709 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
	I0816 22:45:12.070976   20709 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
	I0816 22:45:12.079405   20709 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
	I0816 22:45:12.087184   20709 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/69862.pem
	I0816 22:45:12.092128   20709 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:49 /usr/share/ca-certificates/69862.pem
	I0816 22:45:12.092199   20709 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
	I0816 22:45:12.098658   20709 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:45:12.106781   20709 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:45:12.114955   20709 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:45:12.119751   20709 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:42 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:45:12.119795   20709 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:45:12.126067   20709 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
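
The three test/ls/openssl/ln sequences above install each CA into OpenSSL's hashed-symlink layout: the certificate's subject hash names a `<hash>.0` symlink in /etc/ssl/certs (51391683.0, 3ec20f2e.0, and b5213941.0 here) that points back at the PEM. Below is a minimal Go sketch of the same two steps, assuming openssl is on PATH; it is an illustration of the convention, not minikube's certs.go.

```go
// Sketch (not minikube source) of the hashed-symlink convention the
// commands above implement: OpenSSL resolves CAs in /etc/ssl/certs via
// the certificate's subject hash, so each PEM gets a "<hash>.0" symlink.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCA(pemPath, certsDir string) error {
	// Equivalent of: openssl x509 -hash -noout -in <pemPath>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	// Equivalent of: ln -fs <pemPath> <link>
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

Run against minikubeCA.pem, this produces the b5213941.0 link that the last command above checks for.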
	I0816 22:45:12.134292   20709 kubeadm.go:390] StartCluster: {Name:newest-cni-20210816224431-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816224431-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.116.132 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:45:12.134381   20709 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0816 22:45:12.134453   20709 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:45:12.169562   20709 cri.go:76] found id: ""
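
With --quiet, crictl prints bare container IDs one per line, so the empty result above (`found id: ""`) just means no kube-system containers exist yet on this fresh node. A sketch of that listing step follows, assuming crictl is installed; it is not minikube's cri.go.

```go
// Sketch of the CRI listing step above: --quiet makes crictl emit bare
// container IDs, one per line, so an empty cluster yields no IDs at all.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	// strings.Fields drops the trailing newline and returns nil for empty output.
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system container(s): %q\n", len(ids), ids)
}
```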
	I0816 22:45:12.169625   20709 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:45:12.178624   20709 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:45:12.185688   20709 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:45:12.192695   20709 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:45:12.192738   20709 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0816 22:45:22.411887   20709 out.go:204]   - Generating certificates and keys ...
	I0816 22:45:25.305626   20709 out.go:204]   - Booting up control plane ...
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	9672b955ced7e       523cad1a4df73       About a minute ago   Exited              dashboard-metrics-scraper   3                   2f7b780c4307b
	e30d5daacc87f       9a07b5b4bfac0       2 minutes ago        Running             kubernetes-dashboard        0                   12379a58cddde
	7be9f927a71c2       6e38f40d628db       2 minutes ago        Exited              storage-provisioner         0                   2a86821519acf
	6ea2a5a98c778       eb516548c180f       2 minutes ago        Running             coredns                     0                   e40105c5a7146
	c56f64c3fe77c       5cd54e388abaf       2 minutes ago        Running             kube-proxy                  0                   3a1b01ca39e0d
	5d87176f403fd       2c4adeb21b4ff       3 minutes ago        Running             etcd                        0                   59dc9b6f995b1
	1a95abd0c58fe       b95b1efa0436b       3 minutes ago        Running             kube-controller-manager     0                   d44da39af5252
	1d8ecdfb3c614       00638a24688b0       3 minutes ago        Running             kube-scheduler              0                   d8a20faccc886
	7237baa217ae7       ecf910f40d6e0       3 minutes ago        Running             kube-apiserver              0                   3a4ddc9391f6b
	
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2021-08-16 22:36:28 UTC, end at Mon 2021-08-16 22:45:28 UTC. --
	Aug 16 22:43:35 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:35.586887534Z" level=info msg="Finish piping \"stderr\" of container exec \"757fb3eb1897f1b79f6161a8ded77dce2d35ec992963ef69a2d1b2eaf87d689a\""
	Aug 16 22:43:35 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:35.587063058Z" level=info msg="Finish piping \"stdout\" of container exec \"757fb3eb1897f1b79f6161a8ded77dce2d35ec992963ef69a2d1b2eaf87d689a\""
	Aug 16 22:43:35 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:35.587896854Z" level=info msg="Exec process \"757fb3eb1897f1b79f6161a8ded77dce2d35ec992963ef69a2d1b2eaf87d689a\" exits with exit code 0 and error <nil>"
	Aug 16 22:43:35 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:35.591908272Z" level=info msg="ExecSync for \"5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36\" returns with exit code 0"
	Aug 16 22:43:39 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:39.774405602Z" level=info msg="CreateContainer within sandbox \"2f7b780c4307b02ef87b4146fa91f3e11360d6db0cbc012d1cc73f4b3bf9f70c\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:3,}"
	Aug 16 22:43:39 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:39.835366284Z" level=info msg="CreateContainer within sandbox \"2f7b780c4307b02ef87b4146fa91f3e11360d6db0cbc012d1cc73f4b3bf9f70c\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:3,} returns container id \"9672b955ced7e9fac049fbb30c5f91f4d0f06cda7931c585f9174c9e9693835e\""
	Aug 16 22:43:39 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:39.839941719Z" level=info msg="StartContainer for \"9672b955ced7e9fac049fbb30c5f91f4d0f06cda7931c585f9174c9e9693835e\""
	Aug 16 22:43:40 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:40.232575683Z" level=info msg="StartContainer for \"9672b955ced7e9fac049fbb30c5f91f4d0f06cda7931c585f9174c9e9693835e\" returns successfully"
	Aug 16 22:43:40 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:40.268869593Z" level=info msg="Finish piping stdout of container \"9672b955ced7e9fac049fbb30c5f91f4d0f06cda7931c585f9174c9e9693835e\""
	Aug 16 22:43:40 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:40.269063038Z" level=info msg="Finish piping stderr of container \"9672b955ced7e9fac049fbb30c5f91f4d0f06cda7931c585f9174c9e9693835e\""
	Aug 16 22:43:40 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:40.270862282Z" level=info msg="TaskExit event &TaskExit{ContainerID:9672b955ced7e9fac049fbb30c5f91f4d0f06cda7931c585f9174c9e9693835e,ID:9672b955ced7e9fac049fbb30c5f91f4d0f06cda7931c585f9174c9e9693835e,Pid:8287,ExitStatus:1,ExitedAt:2021-08-16 22:43:40.270131421 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:43:40 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:40.331879178Z" level=info msg="shim disconnected" id=9672b955ced7e9fac049fbb30c5f91f4d0f06cda7931c585f9174c9e9693835e
	Aug 16 22:43:40 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:40.332418749Z" level=error msg="copy shim log" error="read /proc/self/fd/115: file already closed"
	Aug 16 22:43:40 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:40.452270528Z" level=info msg="RemoveContainer for \"424c2afc39c5aef3c23a866ae1af5b1e287e6f1a825a34fb49efed7e3fbb0d0f\""
	Aug 16 22:43:40 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:40.469207115Z" level=info msg="RemoveContainer for \"424c2afc39c5aef3c23a866ae1af5b1e287e6f1a825a34fb49efed7e3fbb0d0f\" returns successfully"
	Aug 16 22:43:45 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:45.472555337Z" level=info msg="ExecSync for \"5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36\" with command [/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/var/lib/minikube/certs/etcd/ca.crt --cert=/var/lib/minikube/certs/etcd/healthcheck-client.crt --key=/var/lib/minikube/certs/etcd/healthcheck-client.key get foo] and timeout 15 (s)"
	Aug 16 22:43:45 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:45.645981133Z" level=info msg="Finish piping \"stderr\" of container exec \"73145ea1780a3f9b0339a69d0d67ac61b1682cc43431b53d69a7d83bb90823b5\""
	Aug 16 22:43:45 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:45.646858395Z" level=info msg="Finish piping \"stdout\" of container exec \"73145ea1780a3f9b0339a69d0d67ac61b1682cc43431b53d69a7d83bb90823b5\""
	Aug 16 22:43:45 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:45.647507032Z" level=info msg="Exec process \"73145ea1780a3f9b0339a69d0d67ac61b1682cc43431b53d69a7d83bb90823b5\" exits with exit code 0 and error <nil>"
	Aug 16 22:43:45 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:43:45.650882421Z" level=info msg="ExecSync for \"5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36\" returns with exit code 0"
	Aug 16 22:44:00 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:44:00.714773255Z" level=info msg="Finish piping stderr of container \"7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b\""
	Aug 16 22:44:00 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:44:00.715270451Z" level=info msg="Finish piping stdout of container \"7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b\""
	Aug 16 22:44:00 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:44:00.719966050Z" level=info msg="TaskExit event &TaskExit{ContainerID:7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b,ID:7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b,Pid:7404,ExitStatus:255,ExitedAt:2021-08-16 22:44:00.719190091 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:44:00 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:44:00.773223002Z" level=info msg="shim disconnected" id=7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b
	Aug 16 22:44:00 old-k8s-version-20210816223154-6986 containerd[2159]: time="2021-08-16T22:44:00.773581010Z" level=error msg="copy shim log" error="read /proc/self/fd/98: file already closed"
	
	* 
	* ==> coredns [6ea2a5a98c7784a7ad7702acc262ab5efb1fb3b94ee224698a6ebf4024bc79d9] <==
	* .:53
	2021-08-16T22:42:53.379Z [INFO] CoreDNS-1.3.1
	2021-08-16T22:42:53.380Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-08-16T22:42:53.380Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669
	[INFO] Reloading
	2021-08-16T22:43:27.937Z [INFO] plugin/reload: Running configuration MD5 = f060395823a948597d75c8d639586234
	[INFO] Reloading complete
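
The two `Running configuration MD5` lines are how the reload plugin detects a Corefile change: the active configuration is hashed, and a differing digest (f060… replacing 599b… above) triggers the reload. A sketch of the digest step, with a hypothetical Corefile path; this is not CoreDNS source.

```go
// Sketch of the digest behind the "Running configuration MD5" lines.
// Hashing the active configuration lets a reloader detect when the
// served config actually changed. The Corefile path is hypothetical.
package main

import (
	"crypto/md5"
	"fmt"
	"os"
)

func configMD5(path string) (string, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", md5.Sum(b)), nil
}

func main() {
	sum, err := configMD5("/etc/coredns/Corefile") // hypothetical path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("Running configuration MD5 =", sum)
}
```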
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.032614] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.912250] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1728 comm=systemd-network
	[  +0.736337] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.278439] vboxguest: loading out-of-tree module taints kernel.
	[  +0.011428] vboxguest: PCI device not found, probably running on physical hardware.
	[ +22.489450] systemd-fstab-generator[2070]: Ignoring "noauto" for root device
	[  +1.510829] systemd-fstab-generator[2104]: Ignoring "noauto" for root device
	[  +0.131636] systemd-fstab-generator[2117]: Ignoring "noauto" for root device
	[  +0.199221] systemd-fstab-generator[2148]: Ignoring "noauto" for root device
	[Aug16 22:37] systemd-fstab-generator[2341]: Ignoring "noauto" for root device
	[ +21.434266] kauditd_printk_skb: 20 callbacks suppressed
	[ +13.076129] kauditd_printk_skb: 116 callbacks suppressed
	[ +14.303690] kauditd_printk_skb: 20 callbacks suppressed
	[Aug16 22:38] kauditd_printk_skb: 5 callbacks suppressed
	[ +14.658718] kauditd_printk_skb: 50 callbacks suppressed
	[ +14.042008] NFSD: Unable to end grace period: -110
	[Aug16 22:42] systemd-fstab-generator[6208]: Ignoring "noauto" for root device
	[ +14.094438] tee (6649): /proc/6453/oom_adj is deprecated, please use /proc/6453/oom_score_adj instead.
	[ +15.834965] kauditd_printk_skb: 77 callbacks suppressed
	[  +6.216912] kauditd_printk_skb: 134 callbacks suppressed
	[Aug16 22:43] kauditd_printk_skb: 2 callbacks suppressed
	[ +13.987543] systemd-fstab-generator[8373]: Ignoring "noauto" for root device
	[  +0.814258] systemd-fstab-generator[8427]: Ignoring "noauto" for root device
	[  +1.051816] systemd-fstab-generator[8478]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [5d87176f403fd2d2f40cdf87cceb95bba3ac18f13feba9197f4fc526da979d36] <==
	* 2021-08-16 22:42:24.919421 I | etcdserver/membership: added member 27d7fcd40abd9523 [https://192.168.94.246:2380] to cluster c6e0109e79cd232
	2021-08-16 22:42:24.923170 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-16 22:42:24.924460 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-16 22:42:24.925117 I | embed: listening for metrics on http://192.168.94.246:2381
	2021-08-16 22:42:25.583010 I | raft: 27d7fcd40abd9523 is starting a new election at term 1
	2021-08-16 22:42:25.583189 I | raft: 27d7fcd40abd9523 became candidate at term 2
	2021-08-16 22:42:25.583232 I | raft: 27d7fcd40abd9523 received MsgVoteResp from 27d7fcd40abd9523 at term 2
	2021-08-16 22:42:25.583249 I | raft: 27d7fcd40abd9523 became leader at term 2
	2021-08-16 22:42:25.583260 I | raft: raft.node: 27d7fcd40abd9523 elected leader 27d7fcd40abd9523 at term 2
	2021-08-16 22:42:25.584018 I | etcdserver: published {Name:old-k8s-version-20210816223154-6986 ClientURLs:[https://192.168.94.246:2379]} to cluster c6e0109e79cd232
	2021-08-16 22:42:25.584145 I | embed: ready to serve client requests
	2021-08-16 22:42:25.585484 I | etcdserver: setting up the initial cluster version to 3.3
	2021-08-16 22:42:25.586252 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-16 22:42:25.587019 N | etcdserver/membership: set the initial cluster version to 3.3
	2021-08-16 22:42:25.599679 I | etcdserver/api: enabled capabilities for version 3.3
	2021-08-16 22:42:25.599962 I | embed: ready to serve client requests
	2021-08-16 22:42:25.609964 I | embed: serving client requests on 192.168.94.246:2379
	proto: no coders for int
	proto: no encoder for ValueSize int [GetProperties]
	2021-08-16 22:42:36.619908 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (127.850155ms) to execute
	2021-08-16 22:42:53.028951 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/metrics-server\" " with result "range_response_count:0 size:5" took too long (173.129099ms) to execute
	2021-08-16 22:42:53.086515 W | etcdserver: read-only range request "key:\"/registry/clusterroles/edit\" " with result "range_response_count:1 size:2542" took too long (117.835584ms) to execute
	2021-08-16 22:42:53.099996 W | etcdserver: read-only range request "key:\"/registry/clusterroles/view\" " with result "range_response_count:1 size:1333" took too long (125.717317ms) to execute
	2021-08-16 22:42:53.107685 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (202.636243ms) to execute
	2021-08-16 22:42:53.115431 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-8546d8b77b.169beab9869bd7a6\" " with result "range_response_count:1 size:602" took too long (205.644523ms) to execute
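
The trailing `took too long` warnings mark read-only range requests that exceeded etcd's slow-request threshold. The 100 ms figure in the sketch below is consistent with the observed 117-205 ms latencies, but it is an assumption about this etcd version's default, not read from its source; the pattern itself is just timing the request and warning past the threshold.

```go
// Sketch of the slow-request warning pattern above: time the request and
// warn when it exceeds a threshold (100ms here, an assumed default).
package main

import (
	"log"
	"time"
)

func timed(name string, threshold time.Duration, fn func()) {
	start := time.Now()
	fn()
	if d := time.Since(start); d > threshold {
		log.Printf("W | %s took too long (%v) to execute", name, d)
	}
}

func main() {
	timed("read-only range request", 100*time.Millisecond, func() {
		time.Sleep(150 * time.Millisecond) // stand-in for the actual range read
	})
}
```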
	
	* 
	* ==> kernel <==
	*  22:46:28 up 10 min,  0 users,  load average: 0.17, 0.47, 0.30
	Linux old-k8s-version-20210816223154-6986 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [7237baa217ae7218864322d6893eb52cef6f4f80ba58f4e4a0683b2bb5861a85] <==
	* W0816 22:46:26.784724       1 asm_amd64.s:1337] Failed to dial 127.0.0.1:2379: grpc: the connection is closing; please retry.
	I0816 22:46:26.784762       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	I0816 22:46:26.800336       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
	I0816 22:46:26.800473       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	I0816 22:46:26.800965       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	W0816 22:46:26.801414       1 asm_amd64.s:1337] Failed to dial 127.0.0.1:2379: grpc: the connection is closing; please retry.
	I0816 22:46:26.808398       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
	I0816 22:46:26.808530       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	I0816 22:46:26.808993       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	W0816 22:46:26.809411       1 asm_amd64.s:1337] Failed to dial 127.0.0.1:2379: grpc: the connection is closing; please retry.
	I0816 22:46:27.741102       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
	I0816 22:46:27.741238       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	I0816 22:46:27.741414       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	W0816 22:46:27.741496       1 asm_amd64.s:1337] Failed to dial 127.0.0.1:2379: grpc: the connection is closing; please retry.
	I0816 22:46:27.743213       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0816 22:46:27.743423       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0816 22:46:27.756069       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
	I0816 22:46:27.756191       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	I0816 22:46:27.756343       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
	W0816 22:46:27.756760       1 asm_amd64.s:1337] Failed to dial 127.0.0.1:2379: grpc: the connection is closing; please retry.
	E0816 22:46:28.395198       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}
	E0816 22:46:28.396530       1 writers.go:172] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:46:28.397861       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}
	I0816 22:46:28.398928       1 trace.go:81] Trace[1602742032]: "List /api/v1/nodes" (started: 2021-08-16 22:45:28.393424338 +0000 UTC m=+185.254219253) (total time: 1m0.005470315s):
	Trace[1602742032]: [1m0.005470315s] [1m0.005455592s] END
	
	* 
	* ==> kube-controller-manager [1a95abd0c58fec2e599b7fce2863703efdb6bdba4e9f4ead191033cb41d2b50d] <==
	* E0816 22:42:53.642035       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:53.642919       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"4b10e5a6-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"422", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:53.663073       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"4b1ad65f-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"423", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:42:53.687785       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:53.688209       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"4b10e5a6-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"422", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:42:53.693913       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:42:53.735371       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:53.735927       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"4b1ad65f-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:42:53.737937       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:53.738275       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"4b10e5a6-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"422", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:42:53.775198       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:53.775342       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"4b10e5a6-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"422", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:42:53.782439       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:53.782566       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"4b10e5a6-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"422", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:42:53.791273       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:53.791703       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"4b1ad65f-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:42:53.831190       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:53.831576       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"4b1ad65f-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:42:53.839990       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:53.840317       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"4b1ad65f-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:42:54.005057       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"4b10e5a6-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"422", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-5b494cc544-nznrc
	I0816 22:42:54.213180       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-8546d8b77b", UID:"4a9254f7-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"387", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-8546d8b77b-vpvp5
	I0816 22:42:54.883479       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"4b1ad65f-fee3-11eb-bea8-525400bf2371", APIVersion:"apps/v1", ResourceVersion:"430", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5d8978d65d-mblkl
	E0816 22:43:20.097686       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:43:22.664721       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [c56f64c3fe77c258bef4bbf05b666c6e0f0b07c60a576cf8f489af832a850b70] <==
	* W0816 22:42:51.084922       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0816 22:42:51.097742       1 server_others.go:148] Using iptables Proxier.
	I0816 22:42:51.098735       1 server_others.go:178] Tearing down inactive rules.
	E0816 22:42:51.210478       1 proxier.go:583] Error removing iptables rules in ipvs proxier: error deleting chain "KUBE-MARK-MASQ": exit status 1: iptables: Too many links.
	I0816 22:42:51.300207       1 server.go:555] Version: v1.14.0
	I0816 22:42:51.322439       1 config.go:202] Starting service config controller
	I0816 22:42:51.322700       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0816 22:42:51.324302       1 config.go:102] Starting endpoints config controller
	I0816 22:42:51.324529       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0816 22:42:51.422986       1 controller_utils.go:1034] Caches are synced for service config controller
	I0816 22:42:51.425124       1 controller_utils.go:1034] Caches are synced for endpoints config controller
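
The first kube-proxy line shows its fallback: an empty or unknown --proxy-mode is treated as iptables, after which the inactive ipvs rules are torn down (the `Too many links` error during that teardown is logged but non-fatal here). A simplified sketch of the fallback; this is not kube-proxy source.

```go
// Simplified sketch of the proxy-mode fallback in the first line above:
// an empty or unrecognized --proxy-mode value is treated as iptables.
package main

import "fmt"

func chooseProxyMode(flag string) string {
	switch flag {
	case "iptables", "ipvs", "userspace":
		return flag
	default:
		fmt.Printf("W | Flag proxy-mode=%q unknown, assuming iptables proxy\n", flag)
		return "iptables"
	}
}

func main() {
	fmt.Println("Using", chooseProxyMode(""), "Proxier.")
}
```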
	
	* 
	* ==> kube-scheduler [1d8ecdfb3c614902b2c25a1e2ad2c17d24e512811fc8e5b8b99fd2bdc4e4179d] <==
	* W0816 22:42:24.914153       1 authentication.go:55] Authentication is disabled
	I0816 22:42:24.914179       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
	I0816 22:42:24.916859       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	E0816 22:42:29.482445       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:42:29.485780       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:42:29.486161       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:42:29.486661       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:42:29.486989       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:42:29.493926       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:42:29.494087       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:42:29.495550       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:42:29.497002       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:42:29.502073       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:42:30.485713       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:42:30.488902       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:42:30.494802       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:42:30.496082       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:42:30.499909       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:42:30.500544       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:42:30.502068       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:42:30.502973       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:42:30.504270       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:42:30.506877       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0816 22:42:32.321137       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0816 22:42:32.421433       1 controller_utils.go:1034] Caches are synced for scheduler controller
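
The burst of `forbidden` reflector errors above is an ordinary startup race: the scheduler's informers begin listing before the API server has finished bootstrapping the system:kube-scheduler RBAC bindings, and each reflector simply retries until the lists succeed and the caches sync at 22:42:32. A stdlib-only sketch of that retry shape, with the bootstrap delay simulated:

```go
// Sketch of the retry shape behind the reflector errors: list calls fail
// with "forbidden" until RBAC bootstrap completes, then succeed. The
// three-attempt delay stands in for the real bootstrap window.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errForbidden = errors.New(`nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes"`)

func listNodes(attempt int) error {
	if attempt < 3 { // pretend RBAC roles are not bootstrapped yet
		return errForbidden
	}
	return nil
}

func main() {
	for attempt := 0; ; attempt++ {
		if err := listNodes(attempt); err != nil {
			fmt.Println("E | Failed to list *v1.Node:", err)
			time.Sleep(time.Second)
			continue
		}
		fmt.Println("I | Caches are synced for scheduler controller")
		return
	}
}
```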
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:36:28 UTC, end at Mon 2021-08-16 22:46:28 UTC. --
	Aug 16 22:42:54 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:42:54.823340    6216 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:42:54 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:42:54.823394    6216 pod_workers.go:190] Error syncing pod 4b7525e3-fee3-11eb-bea8-525400bf2371 ("metrics-server-8546d8b77b-vpvp5_kube-system(4b7525e3-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Aug 16 22:42:54 old-k8s-version-20210816223154-6986 kubelet[6216]: I0816 22:42:54.986126    6216 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/4bdd50b5-fee3-11eb-bea8-525400bf2371-tmp-volume") pod "kubernetes-dashboard-5d8978d65d-mblkl" (UID: "4bdd50b5-fee3-11eb-bea8-525400bf2371")
	Aug 16 22:42:54 old-k8s-version-20210816223154-6986 kubelet[6216]: I0816 22:42:54.986343    6216 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-dllwm" (UniqueName: "kubernetes.io/secret/4bdd50b5-fee3-11eb-bea8-525400bf2371-kubernetes-dashboard-token-dllwm") pod "kubernetes-dashboard-5d8978d65d-mblkl" (UID: "4bdd50b5-fee3-11eb-bea8-525400bf2371")
	Aug 16 22:42:55 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:42:55.071945    6216 pod_workers.go:190] Error syncing pod 4b7525e3-fee3-11eb-bea8-525400bf2371 ("metrics-server-8546d8b77b-vpvp5_kube-system(4b7525e3-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:43:03 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:03.328438    6216 pod_workers.go:190] Error syncing pod 4b4f43e1-fee3-11eb-bea8-525400bf2371 ("dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"
	Aug 16 22:43:04 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:04.313833    6216 pod_workers.go:190] Error syncing pod 4b4f43e1-fee3-11eb-bea8-525400bf2371 ("dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"
	Aug 16 22:43:05 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:05.317443    6216 pod_workers.go:190] Error syncing pod 4b4f43e1-fee3-11eb-bea8-525400bf2371 ("dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"
	Aug 16 22:43:06 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:06.777480    6216 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:43:06 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:06.777711    6216 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:43:06 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:06.777911    6216 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:43:06 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:06.777960    6216 pod_workers.go:190] Error syncing pod 4b7525e3-fee3-11eb-bea8-525400bf2371 ("metrics-server-8546d8b77b-vpvp5_kube-system(4b7525e3-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Aug 16 22:43:18 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:18.362406    6216 pod_workers.go:190] Error syncing pod 4b4f43e1-fee3-11eb-bea8-525400bf2371 ("dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"
	Aug 16 22:43:18 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:18.767919    6216 pod_workers.go:190] Error syncing pod 4b7525e3-fee3-11eb-bea8-525400bf2371 ("metrics-server-8546d8b77b-vpvp5_kube-system(4b7525e3-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:43:24 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:24.338079    6216 pod_workers.go:190] Error syncing pod 4b4f43e1-fee3-11eb-bea8-525400bf2371 ("dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"
	Aug 16 22:43:30 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:30.780007    6216 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:43:30 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:30.781232    6216 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:43:30 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:30.781840    6216 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:43:30 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:30.782016    6216 pod_workers.go:190] Error syncing pod 4b7525e3-fee3-11eb-bea8-525400bf2371 ("metrics-server-8546d8b77b-vpvp5_kube-system(4b7525e3-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Aug 16 22:43:40 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:40.445275    6216 pod_workers.go:190] Error syncing pod 4b4f43e1-fee3-11eb-bea8-525400bf2371 ("dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"
	Aug 16 22:43:43 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:43.774125    6216 pod_workers.go:190] Error syncing pod 4b7525e3-fee3-11eb-bea8-525400bf2371 ("metrics-server-8546d8b77b-vpvp5_kube-system(4b7525e3-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:43:44 old-k8s-version-20210816223154-6986 kubelet[6216]: E0816 22:43:44.338559    6216 pod_workers.go:190] Error syncing pod 4b4f43e1-fee3-11eb-bea8-525400bf2371 ("dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-nznrc_kubernetes-dashboard(4b4f43e1-fee3-11eb-bea8-525400bf2371)"
	Aug 16 22:43:48 old-k8s-version-20210816223154-6986 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:43:48 old-k8s-version-20210816223154-6986 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:43:48 old-k8s-version-20210816223154-6986 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
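
The CrashLoopBackOff delays above double from 10s to 20s to 40s: the kubelet backs off container restarts exponentially rather than retrying at a fixed rate. A sketch of that schedule follows; the 5-minute cap is an assumption about this kubelet version, not read from its source.

```go
// Sketch of the restart schedule the back-off messages trace out
// (10s, 20s, 40s, ...), with an assumed 5-minute cap.
package main

import (
	"fmt"
	"time"
)

func restartBackoff(attempt int) time.Duration {
	d := 10 * time.Second
	for i := 0; i < attempt; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for i := 0; i < 6; i++ {
		fmt.Printf("restart %d: back-off %v\n", i+1, restartBackoff(i))
	}
}
```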
	
	* 
	* ==> kubernetes-dashboard [e30d5daacc87f690bc273d129f168487c51648dd2e6bc8d0870a0848215e02e9] <==
	* 2021/08/16 22:42:56 Using namespace: kubernetes-dashboard
	2021/08/16 22:42:56 Using in-cluster config to connect to apiserver
	2021/08/16 22:42:56 Using secret token for csrf signing
	2021/08/16 22:42:56 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/16 22:42:56 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/16 22:42:56 Successful initial request to the apiserver, version: v1.14.0
	2021/08/16 22:42:56 Generating JWE encryption key
	2021/08/16 22:42:56 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/16 22:42:56 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/16 22:42:56 Initializing JWE encryption key from synchronized object
	2021/08/16 22:42:56 Creating in-cluster Sidecar client
	2021/08/16 22:42:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:42:56 Serving insecurely on HTTP port: 9090
	2021/08/16 22:43:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:42:56 Starting overwatch
	
	* 
	* ==> storage-provisioner [7be9f927a71c2d23028ae295036da4b559f17ffbc65e47488a685a486957738b] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 92 [sync.Cond.Wait, 1 minutes]:
	sync.runtime_notifyListWait(0xc0000d2350, 0xc000000003)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc0000d2340)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc0000945a0, 0x0, 0x0, 0x0)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc0003b2c80, 0x18e5530, 0xc00042a7c0, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0003fde40)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0003fde40, 0x18b3d60, 0xc0004264e0, 0x17a0e01, 0xc000102300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0003fde40, 0x3b9aca00, 0x0, 0xc000095801, 0xc000102300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc0003fde40, 0x3b9aca00, 0xc000102300)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
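The goroutine dump above is an idle state, not a crash: goroutine 92 is a provision-controller volume worker parked in workqueue.Get (hence the sync.Cond.Wait frame), and the Until/JitterUntil/BackoffUntil frames are apimachinery's restart loop around it. A compact sketch of that pattern using the same packages the trace names, illustrative rather than the provisioner's actual code:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/util/workqueue"
    )

    func main() {
        queue := workqueue.New()
        stop := make(chan struct{})

        // wait.Until re-runs the worker every period until stop closes,
        // matching the wait.BackoffUntil frames in the dump.
        go wait.Until(func() {
            for {
                // Get blocks in sync.Cond.Wait while the queue is empty,
                // which is exactly where goroutine 92 is parked above.
                item, shutdown := queue.Get()
                if shutdown {
                    return
                }
                fmt.Println("processing", item)
                queue.Done(item)
            }
        }, time.Second, stop)

        queue.Add("pvc-demo")
        time.Sleep(100 * time.Millisecond)
        queue.ShutDown()
        close(stop)
    }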
	
	
-- /stdout --
** stderr ** 
	E0816 22:46:28.398964   21068 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: "\n** stderr ** \nError from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)\n\n** /stderr **"
	! unable to fetch logs for: describe nodes
** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (167.46s)

TestStartStop/group/embed-certs/serial/Pause (106.89s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20210816223333-6986 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-20210816223333-6986 --alsologtostderr -v=1: exit status 80 (2.401967979s)
-- stdout --
	* Pausing node embed-certs-20210816223333-6986 ... 
	
	
-- /stdout --
** stderr ** 
	I0816 22:43:55.314429   20238 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:43:55.314625   20238 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:43:55.314636   20238 out.go:311] Setting ErrFile to fd 2...
	I0816 22:43:55.314640   20238 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:43:55.314743   20238 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:43:55.314891   20238 out.go:305] Setting JSON to false
	I0816 22:43:55.314909   20238 mustload.go:65] Loading cluster: embed-certs-20210816223333-6986
	I0816 22:43:55.315207   20238 config.go:177] Loaded profile config "embed-certs-20210816223333-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:43:55.315543   20238 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:55.315593   20238 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:55.327219   20238 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34805
	I0816 22:43:55.327622   20238 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:55.328144   20238 main.go:130] libmachine: Using API Version  1
	I0816 22:43:55.328168   20238 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:55.328572   20238 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:55.328738   20238 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetState
	I0816 22:43:55.331960   20238 host.go:66] Checking if "embed-certs-20210816223333-6986" exists ...
	I0816 22:43:55.332364   20238 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:55.332402   20238 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:55.342755   20238 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41803
	I0816 22:43:55.343200   20238 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:55.343626   20238 main.go:130] libmachine: Using API Version  1
	I0816 22:43:55.343645   20238 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:55.344037   20238 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:55.344214   20238 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
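The GetVersion/SetConfigRaw/GetMachineName exchanges above (repeated once per query against the machine) are libmachine's plugin protocol: each driver binary runs as a separate process serving Go net/rpc on a random localhost port, and minikube dials it for every operation. A greatly simplified sketch of that handshake, with an invented Driver type and method set rather than the real libmachine API:

    package main

    import (
        "fmt"
        "net"
        "net/rpc"
    )

    type Driver struct{}

    // GetVersion mimics the first call minikube makes after connecting.
    func (d *Driver) GetVersion(_ int, out *int) error {
        *out = 1
        return nil
    }

    func main() {
        srv := rpc.NewServer()
        if err := srv.Register(&Driver{}); err != nil {
            panic(err)
        }
        // Random port, like the 127.0.0.1:34805 the log reports.
        ln, err := net.Listen("tcp", "127.0.0.1:0")
        if err != nil {
            panic(err)
        }
        go srv.Accept(ln)
        fmt.Println("Plugin server listening at address", ln.Addr())

        client, err := rpc.Dial("tcp", ln.Addr().String())
        if err != nil {
            panic(err)
        }
        var v int
        if err := client.Call("Driver.GetVersion", 0, &v); err != nil {
            panic(err)
        }
        fmt.Println("Using API Version ", v)
    }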
	I0816 22:43:55.344860   20238 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-20210816223333-6986 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0816 22:43:55.347230   20238 out.go:177] * Pausing node embed-certs-20210816223333-6986 ... 
	I0816 22:43:55.347258   20238 host.go:66] Checking if "embed-certs-20210816223333-6986" exists ...
	I0816 22:43:55.347571   20238 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:55.347615   20238 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:55.358766   20238 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33221
	I0816 22:43:55.359186   20238 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:55.359645   20238 main.go:130] libmachine: Using API Version  1
	I0816 22:43:55.359666   20238 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:55.360026   20238 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:55.360198   20238 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:43:55.360444   20238 ssh_runner.go:149] Run: systemctl --version
	I0816 22:43:55.360485   20238 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:43:55.366073   20238 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:55.366563   20238 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:43:55.366605   20238 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:55.366670   20238 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:43:55.366819   20238 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:43:55.366964   20238 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:43:55.367086   20238 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:43:55.461832   20238 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:43:55.475533   20238 pause.go:50] kubelet running: true
	I0816 22:43:55.475595   20238 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:43:55.730594   20238 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:43:55.730686   20238 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:43:55.869948   20238 cri.go:76] found id: "14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356"
	I0816 22:43:55.869973   20238 cri.go:76] found id: "acfa6acd622519fbbdfbc3cc49cca060cd8efae7eb741a5d9b0b773af45869f9"
	I0816 22:43:55.869978   20238 cri.go:76] found id: "d2b4592df27b12f825742f24ddd825a80ddb1dc22ba0ff1044a6d255e91c1480"
	I0816 22:43:55.869982   20238 cri.go:76] found id: "5252b10be7bb737ec2020248c745eda7772c84f779b0b3c55c946d8a978aff12"
	I0816 22:43:55.869985   20238 cri.go:76] found id: "6d8fa8177082e340822efb9f6f8c7938b9c592b339aeb928ced35b94554c0f5c"
	I0816 22:43:55.869989   20238 cri.go:76] found id: "00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8"
	I0816 22:43:55.869993   20238 cri.go:76] found id: "3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522"
	I0816 22:43:55.869996   20238 cri.go:76] found id: "1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18"
	I0816 22:43:55.869999   20238 cri.go:76] found id: "654de8c4789a875b3548d819c9b56cb6cb50e2799fcbb2b1e67efa6ea8278322"
	I0816 22:43:55.870005   20238 cri.go:76] found id: ""
	I0816 22:43:55.870050   20238 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0816 22:43:55.924872   20238 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8","pid":5734,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8/rootfs","created":"2021-08-16T22:43:14.27470469Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3a8fef0311637c83fb416dbb1aef8d50687102077f4cc79f29069d5e7d9571a8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0d72b213127cdbfe36e7d0b7d9293e8b21a09f8cad6b9fb3ff71d98d3e5f2c29","pid":6528,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d72b213127cdbfe36e7d0b7d9293e8b21a09f8cad6b9fb3ff71d98d3e5f2c29","rootfs":"/run/containerd/io.contai
nerd.runtime.v2.task/k8s.io/0d72b213127cdbfe36e7d0b7d9293e8b21a09f8cad6b9fb3ff71d98d3e5f2c29/rootfs","created":"2021-08-16T22:43:40.18846409Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0d72b213127cdbfe36e7d0b7d9293e8b21a09f8cad6b9fb3ff71d98d3e5f2c29","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-qfrpw_abb75357-7b33-4327-aa7f-8e9c15a192f8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356","pid":6686,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356/rootfs","created":"2021-08-16T22:43:41.51578432Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernete
s.cri.sandbox-id":"762745540814d44f9818192f4231a792eac639d8555f0e44560cb7cfa0a304d9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1cd3b203f0f522cc79f0711a48ab21a8985eb676b0afb5a18059225755bea1a0","pid":5625,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cd3b203f0f522cc79f0711a48ab21a8985eb676b0afb5a18059225755bea1a0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cd3b203f0f522cc79f0711a48ab21a8985eb676b0afb5a18059225755bea1a0/rootfs","created":"2021-08-16T22:43:13.445186357Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1cd3b203f0f522cc79f0711a48ab21a8985eb676b0afb5a18059225755bea1a0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-embed-certs-20210816223333-6986_339023db77960ebb780280934f821008"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522","pid":5699,"status":"running","bundle":"/run/containerd/io.cont
ainerd.runtime.v2.task/k8s.io/3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522/rootfs","created":"2021-08-16T22:43:14.15251317Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e78080cd3b11841e768934d750433b0cfce146c83cc970c2abbcbb0ef3ce0b69"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a8fef0311637c83fb416dbb1aef8d50687102077f4cc79f29069d5e7d9571a8","pid":5636,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a8fef0311637c83fb416dbb1aef8d50687102077f4cc79f29069d5e7d9571a8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a8fef0311637c83fb416dbb1aef8d50687102077f4cc79f29069d5e7d9571a8/rootfs","created":"2021-08-16T22:43:13.452354295Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3a8
fef0311637c83fb416dbb1aef8d50687102077f4cc79f29069d5e7d9571a8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-20210816223333-6986_422293a16536c2eaadc36e5f34f8a60c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3b689050c0847484ccb7645f1c71238fa222ce2da509e4b7c9396ed6c4a2ac56","pid":5618,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b689050c0847484ccb7645f1c71238fa222ce2da509e4b7c9396ed6c4a2ac56","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b689050c0847484ccb7645f1c71238fa222ce2da509e4b7c9396ed6c4a2ac56/rootfs","created":"2021-08-16T22:43:13.449367367Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3b689050c0847484ccb7645f1c71238fa222ce2da509e4b7c9396ed6c4a2ac56","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-embed-certs-20210816223333-6986_bb6da460428b943b0fa1cd077b50be82"},"owner":"root"},{"ociVersion":"1.0.2-dev
","id":"5252b10be7bb737ec2020248c745eda7772c84f779b0b3c55c946d8a978aff12","pid":5812,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5252b10be7bb737ec2020248c745eda7772c84f779b0b3c55c946d8a978aff12","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5252b10be7bb737ec2020248c745eda7772c84f779b0b3c55c946d8a978aff12/rootfs","created":"2021-08-16T22:43:14.716403137Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3b689050c0847484ccb7645f1c71238fa222ce2da509e4b7c9396ed6c4a2ac56"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"654de8c4789a875b3548d819c9b56cb6cb50e2799fcbb2b1e67efa6ea8278322","pid":6841,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/654de8c4789a875b3548d819c9b56cb6cb50e2799fcbb2b1e67efa6ea8278322","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/654de8c4789a875b3548d819c9b56cb6cb50e2799fcbb2b1e67efa6ea827832
2/rootfs","created":"2021-08-16T22:43:42.488016828Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"bba06897412e67cd12b6ae3e8fb73b92f4ddcd2287f60ac024d984f45f17941b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6d8fa8177082e340822efb9f6f8c7938b9c592b339aeb928ced35b94554c0f5c","pid":5785,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d8fa8177082e340822efb9f6f8c7938b9c592b339aeb928ced35b94554c0f5c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d8fa8177082e340822efb9f6f8c7938b9c592b339aeb928ced35b94554c0f5c/rootfs","created":"2021-08-16T22:43:14.599551512Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1cd3b203f0f522cc79f0711a48ab21a8985eb676b0afb5a18059225755bea1a0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"73de0351bc3ba9190a758110ce31a25d612febb
8a72bd854bf515b02fe155a12","pid":6290,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73de0351bc3ba9190a758110ce31a25d612febb8a72bd854bf515b02fe155a12","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73de0351bc3ba9190a758110ce31a25d612febb8a72bd854bf515b02fe155a12/rootfs","created":"2021-08-16T22:43:37.318392838Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"73de0351bc3ba9190a758110ce31a25d612febb8a72bd854bf515b02fe155a12","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-mfshm_cb9ac226-b63f-4de1-b4af-b8e2bf280d95"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"762745540814d44f9818192f4231a792eac639d8555f0e44560cb7cfa0a304d9","pid":6441,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/762745540814d44f9818192f4231a792eac639d8555f0e44560cb7cfa0a304d9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/762745540814d44f9818192f4231a792
eac639d8555f0e44560cb7cfa0a304d9/rootfs","created":"2021-08-16T22:43:40.159165318Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"762745540814d44f9818192f4231a792eac639d8555f0e44560cb7cfa0a304d9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_f3fc0038-f88e-416f-81e3-fb387b0e010a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"acfa6acd622519fbbdfbc3cc49cca060cd8efae7eb741a5d9b0b773af45869f9","pid":6384,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/acfa6acd622519fbbdfbc3cc49cca060cd8efae7eb741a5d9b0b773af45869f9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/acfa6acd622519fbbdfbc3cc49cca060cd8efae7eb741a5d9b0b773af45869f9/rootfs","created":"2021-08-16T22:43:38.009471936Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"73de0351bc3ba9190a758110ce31a25d612febb8a72bd854bf515b02fe155a1
2"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b72889a5e9646897cc52367c89b8c9e64eb06e5c76cf26ec26ef04a83733c5a9","pid":6118,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b72889a5e9646897cc52367c89b8c9e64eb06e5c76cf26ec26ef04a83733c5a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b72889a5e9646897cc52367c89b8c9e64eb06e5c76cf26ec26ef04a83733c5a9/rootfs","created":"2021-08-16T22:43:36.578590148Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b72889a5e9646897cc52367c89b8c9e64eb06e5c76cf26ec26ef04a83733c5a9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-zwcwz_f85562a3-8576-4dbf-a2b2-3f6a3d199df3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bba06897412e67cd12b6ae3e8fb73b92f4ddcd2287f60ac024d984f45f17941b","pid":6805,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bba06897412e67cd12b6ae3e8fb73b92f4ddcd2287f60ac024d984f45f17941b","rootfs":"/r
un/containerd/io.containerd.runtime.v2.task/k8s.io/bba06897412e67cd12b6ae3e8fb73b92f4ddcd2287f60ac024d984f45f17941b/rootfs","created":"2021-08-16T22:43:41.965363221Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bba06897412e67cd12b6ae3e8fb73b92f4ddcd2287f60ac024d984f45f17941b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-6fqh5_1375c060-0e73-47e5-b599-6d7e58617b31"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b","pid":6753,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b/rootfs","created":"2021-08-16T22:43:41.713747391Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"c
c84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-b5fzd_2e56d6c3-4bfc-46cc-b263-0b26d0941122"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d2b4592df27b12f825742f24ddd825a80ddb1dc22ba0ff1044a6d255e91c1480","pid":6200,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d2b4592df27b12f825742f24ddd825a80ddb1dc22ba0ff1044a6d255e91c1480","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d2b4592df27b12f825742f24ddd825a80ddb1dc22ba0ff1044a6d255e91c1480/rootfs","created":"2021-08-16T22:43:36.938482813Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b72889a5e9646897cc52367c89b8c9e64eb06e5c76cf26ec26ef04a83733c5a9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e78080cd3b11841e768934d750433b0cfce146c83cc970c2abbcbb0ef3ce0b69","pid":5604,"status":"run
ning","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e78080cd3b11841e768934d750433b0cfce146c83cc970c2abbcbb0ef3ce0b69","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e78080cd3b11841e768934d750433b0cfce146c83cc970c2abbcbb0ef3ce0b69/rootfs","created":"2021-08-16T22:43:13.383635568Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e78080cd3b11841e768934d750433b0cfce146c83cc970c2abbcbb0ef3ce0b69","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-embed-certs-20210816223333-6986_071058c7a3a75a94b15ff3cc58be11b4"},"owner":"root"}]
	I0816 22:43:55.925097   20238 cri.go:113] list returned 18 containers
	I0816 22:43:55.925112   20238 cri.go:116] container: {ID:00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8 Status:running}
	I0816 22:43:55.925150   20238 cri.go:116] container: {ID:0d72b213127cdbfe36e7d0b7d9293e8b21a09f8cad6b9fb3ff71d98d3e5f2c29 Status:running}
	I0816 22:43:55.925158   20238 cri.go:118] skipping 0d72b213127cdbfe36e7d0b7d9293e8b21a09f8cad6b9fb3ff71d98d3e5f2c29 - not in ps
	I0816 22:43:55.925166   20238 cri.go:116] container: {ID:14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356 Status:running}
	I0816 22:43:55.925174   20238 cri.go:116] container: {ID:1cd3b203f0f522cc79f0711a48ab21a8985eb676b0afb5a18059225755bea1a0 Status:running}
	I0816 22:43:55.925183   20238 cri.go:118] skipping 1cd3b203f0f522cc79f0711a48ab21a8985eb676b0afb5a18059225755bea1a0 - not in ps
	I0816 22:43:55.925189   20238 cri.go:116] container: {ID:3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522 Status:running}
	I0816 22:43:55.925196   20238 cri.go:116] container: {ID:3a8fef0311637c83fb416dbb1aef8d50687102077f4cc79f29069d5e7d9571a8 Status:running}
	I0816 22:43:55.925204   20238 cri.go:118] skipping 3a8fef0311637c83fb416dbb1aef8d50687102077f4cc79f29069d5e7d9571a8 - not in ps
	I0816 22:43:55.925210   20238 cri.go:116] container: {ID:3b689050c0847484ccb7645f1c71238fa222ce2da509e4b7c9396ed6c4a2ac56 Status:running}
	I0816 22:43:55.925217   20238 cri.go:118] skipping 3b689050c0847484ccb7645f1c71238fa222ce2da509e4b7c9396ed6c4a2ac56 - not in ps
	I0816 22:43:55.925225   20238 cri.go:116] container: {ID:5252b10be7bb737ec2020248c745eda7772c84f779b0b3c55c946d8a978aff12 Status:running}
	I0816 22:43:55.925231   20238 cri.go:116] container: {ID:654de8c4789a875b3548d819c9b56cb6cb50e2799fcbb2b1e67efa6ea8278322 Status:running}
	I0816 22:43:55.925239   20238 cri.go:116] container: {ID:6d8fa8177082e340822efb9f6f8c7938b9c592b339aeb928ced35b94554c0f5c Status:running}
	I0816 22:43:55.925245   20238 cri.go:116] container: {ID:73de0351bc3ba9190a758110ce31a25d612febb8a72bd854bf515b02fe155a12 Status:running}
	I0816 22:43:55.925253   20238 cri.go:118] skipping 73de0351bc3ba9190a758110ce31a25d612febb8a72bd854bf515b02fe155a12 - not in ps
	I0816 22:43:55.925263   20238 cri.go:116] container: {ID:762745540814d44f9818192f4231a792eac639d8555f0e44560cb7cfa0a304d9 Status:running}
	I0816 22:43:55.925273   20238 cri.go:118] skipping 762745540814d44f9818192f4231a792eac639d8555f0e44560cb7cfa0a304d9 - not in ps
	I0816 22:43:55.925279   20238 cri.go:116] container: {ID:acfa6acd622519fbbdfbc3cc49cca060cd8efae7eb741a5d9b0b773af45869f9 Status:running}
	I0816 22:43:55.925285   20238 cri.go:116] container: {ID:b72889a5e9646897cc52367c89b8c9e64eb06e5c76cf26ec26ef04a83733c5a9 Status:running}
	I0816 22:43:55.925293   20238 cri.go:118] skipping b72889a5e9646897cc52367c89b8c9e64eb06e5c76cf26ec26ef04a83733c5a9 - not in ps
	I0816 22:43:55.925298   20238 cri.go:116] container: {ID:bba06897412e67cd12b6ae3e8fb73b92f4ddcd2287f60ac024d984f45f17941b Status:running}
	I0816 22:43:55.925306   20238 cri.go:118] skipping bba06897412e67cd12b6ae3e8fb73b92f4ddcd2287f60ac024d984f45f17941b - not in ps
	I0816 22:43:55.925311   20238 cri.go:116] container: {ID:cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b Status:running}
	I0816 22:43:55.925318   20238 cri.go:118] skipping cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b - not in ps
	I0816 22:43:55.925324   20238 cri.go:116] container: {ID:d2b4592df27b12f825742f24ddd825a80ddb1dc22ba0ff1044a6d255e91c1480 Status:running}
	I0816 22:43:55.925331   20238 cri.go:116] container: {ID:e78080cd3b11841e768934d750433b0cfce146c83cc970c2abbcbb0ef3ce0b69 Status:running}
	I0816 22:43:55.925338   20238 cri.go:118] skipping e78080cd3b11841e768934d750433b0cfce146c83cc970c2abbcbb0ef3ce0b69 - not in ps
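The cri.go lines above reconcile two container lists: crictl ps returns real containers in the selected namespaces, while runc list returns every task under the runtime root, pause sandboxes included. Any runc ID that crictl did not report is logged "not in ps" and skipped; on the retry pass further down, containers no longer in the wanted state are skipped as well. A self-contained sketch of that reconciliation, with trimmed stand-in IDs and an assumed struct rather than minikube's exact cri.go:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // container carries the two fields read from `runc list -f json`.
    type container struct {
        ID     string `json:"id"`
        Status string `json:"status"`
    }

    func main() {
        // Heavily trimmed stand-in for the JSON dump above.
        data := []byte(`[
            {"id": "00de6102...", "status": "running"},
            {"id": "0d72b213...", "status": "running"}
        ]`)
        var all []container
        if err := json.Unmarshal(data, &all); err != nil {
            panic(err)
        }

        // IDs that crictl ps reported; sandbox IDs never appear here.
        inPs := map[string]bool{"00de6102...": true}

        var toPause []string
        for _, c := range all {
            switch {
            case !inPs[c.ID]:
                fmt.Println("skipping", c.ID, "- not in ps")
            case c.Status != "running":
                fmt.Printf("skipping {%s %s}: state = %q, want %q\n",
                    c.ID, c.Status, c.Status, "running")
            default:
                toPause = append(toPause, c.ID)
            }
        }
        fmt.Println("to pause:", toPause)
    }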
	I0816 22:43:55.925384   20238 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8
	I0816 22:43:55.948894   20238 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8 14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356
	I0816 22:43:55.978208   20238 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8 14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:43:55Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
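runc's usage text pinpoints the failure: pause accepts exactly one container ID per invocation, and the batched command above passed two, so minikube backs off and retries, making progress one container at a time. A one-invocation-per-container fallback would look like this sketch (pauseAll is a hypothetical helper, not minikube's source):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // pauseAll is a hypothetical helper: runc takes exactly one ID per
    // pause, so a batch becomes one invocation per container.
    func pauseAll(root string, ids []string) error {
        for _, id := range ids {
            cmd := exec.Command("sudo", "runc", "--root", root, "pause", id)
            if out, err := cmd.CombinedOutput(); err != nil {
                return fmt.Errorf("runc pause %s: %v: %s", id, err, out)
            }
        }
        return nil
    }

    func main() {
        // The two IDs from the failed batched call above.
        ids := []string{
            "00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8",
            "14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356",
        }
        if err := pauseAll("/run/containerd/runc/k8s.io", ids); err != nil {
            fmt.Println(err)
        }
    }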
	I0816 22:43:56.254657   20238 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:43:56.265937   20238 pause.go:50] kubelet running: false
	I0816 22:43:56.266015   20238 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:43:56.470724   20238 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:43:56.470824   20238 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:43:56.597049   20238 cri.go:76] found id: "14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356"
	I0816 22:43:56.597077   20238 cri.go:76] found id: "acfa6acd622519fbbdfbc3cc49cca060cd8efae7eb741a5d9b0b773af45869f9"
	I0816 22:43:56.597084   20238 cri.go:76] found id: "d2b4592df27b12f825742f24ddd825a80ddb1dc22ba0ff1044a6d255e91c1480"
	I0816 22:43:56.597089   20238 cri.go:76] found id: "5252b10be7bb737ec2020248c745eda7772c84f779b0b3c55c946d8a978aff12"
	I0816 22:43:56.597092   20238 cri.go:76] found id: "6d8fa8177082e340822efb9f6f8c7938b9c592b339aeb928ced35b94554c0f5c"
	I0816 22:43:56.597096   20238 cri.go:76] found id: "00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8"
	I0816 22:43:56.597099   20238 cri.go:76] found id: "3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522"
	I0816 22:43:56.597102   20238 cri.go:76] found id: "1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18"
	I0816 22:43:56.597106   20238 cri.go:76] found id: "654de8c4789a875b3548d819c9b56cb6cb50e2799fcbb2b1e67efa6ea8278322"
	I0816 22:43:56.597112   20238 cri.go:76] found id: ""
	I0816 22:43:56.597149   20238 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0816 22:43:56.650096   20238 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8","pid":5734,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8/rootfs","created":"2021-08-16T22:43:14.27470469Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3a8fef0311637c83fb416dbb1aef8d50687102077f4cc79f29069d5e7d9571a8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0d72b213127cdbfe36e7d0b7d9293e8b21a09f8cad6b9fb3ff71d98d3e5f2c29","pid":6528,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d72b213127cdbfe36e7d0b7d9293e8b21a09f8cad6b9fb3ff71d98d3e5f2c29","rootfs":"/run/containerd/io.contain
erd.runtime.v2.task/k8s.io/0d72b213127cdbfe36e7d0b7d9293e8b21a09f8cad6b9fb3ff71d98d3e5f2c29/rootfs","created":"2021-08-16T22:43:40.18846409Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0d72b213127cdbfe36e7d0b7d9293e8b21a09f8cad6b9fb3ff71d98d3e5f2c29","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-qfrpw_abb75357-7b33-4327-aa7f-8e9c15a192f8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356","pid":6686,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356/rootfs","created":"2021-08-16T22:43:41.51578432Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes
.cri.sandbox-id":"762745540814d44f9818192f4231a792eac639d8555f0e44560cb7cfa0a304d9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1cd3b203f0f522cc79f0711a48ab21a8985eb676b0afb5a18059225755bea1a0","pid":5625,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cd3b203f0f522cc79f0711a48ab21a8985eb676b0afb5a18059225755bea1a0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cd3b203f0f522cc79f0711a48ab21a8985eb676b0afb5a18059225755bea1a0/rootfs","created":"2021-08-16T22:43:13.445186357Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1cd3b203f0f522cc79f0711a48ab21a8985eb676b0afb5a18059225755bea1a0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-embed-certs-20210816223333-6986_339023db77960ebb780280934f821008"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522","pid":5699,"status":"running","bundle":"/run/containerd/io.conta
inerd.runtime.v2.task/k8s.io/3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522/rootfs","created":"2021-08-16T22:43:14.15251317Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e78080cd3b11841e768934d750433b0cfce146c83cc970c2abbcbb0ef3ce0b69"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a8fef0311637c83fb416dbb1aef8d50687102077f4cc79f29069d5e7d9571a8","pid":5636,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a8fef0311637c83fb416dbb1aef8d50687102077f4cc79f29069d5e7d9571a8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a8fef0311637c83fb416dbb1aef8d50687102077f4cc79f29069d5e7d9571a8/rootfs","created":"2021-08-16T22:43:13.452354295Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3a8f
ef0311637c83fb416dbb1aef8d50687102077f4cc79f29069d5e7d9571a8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-20210816223333-6986_422293a16536c2eaadc36e5f34f8a60c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3b689050c0847484ccb7645f1c71238fa222ce2da509e4b7c9396ed6c4a2ac56","pid":5618,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b689050c0847484ccb7645f1c71238fa222ce2da509e4b7c9396ed6c4a2ac56","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b689050c0847484ccb7645f1c71238fa222ce2da509e4b7c9396ed6c4a2ac56/rootfs","created":"2021-08-16T22:43:13.449367367Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3b689050c0847484ccb7645f1c71238fa222ce2da509e4b7c9396ed6c4a2ac56","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-embed-certs-20210816223333-6986_bb6da460428b943b0fa1cd077b50be82"},"owner":"root"},{"ociVersion":"1.0.2-dev"
,"id":"5252b10be7bb737ec2020248c745eda7772c84f779b0b3c55c946d8a978aff12","pid":5812,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5252b10be7bb737ec2020248c745eda7772c84f779b0b3c55c946d8a978aff12","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5252b10be7bb737ec2020248c745eda7772c84f779b0b3c55c946d8a978aff12/rootfs","created":"2021-08-16T22:43:14.716403137Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3b689050c0847484ccb7645f1c71238fa222ce2da509e4b7c9396ed6c4a2ac56"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"654de8c4789a875b3548d819c9b56cb6cb50e2799fcbb2b1e67efa6ea8278322","pid":6841,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/654de8c4789a875b3548d819c9b56cb6cb50e2799fcbb2b1e67efa6ea8278322","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/654de8c4789a875b3548d819c9b56cb6cb50e2799fcbb2b1e67efa6ea8278322
/rootfs","created":"2021-08-16T22:43:42.488016828Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"bba06897412e67cd12b6ae3e8fb73b92f4ddcd2287f60ac024d984f45f17941b"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"6d8fa8177082e340822efb9f6f8c7938b9c592b339aeb928ced35b94554c0f5c","pid":5785,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d8fa8177082e340822efb9f6f8c7938b9c592b339aeb928ced35b94554c0f5c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d8fa8177082e340822efb9f6f8c7938b9c592b339aeb928ced35b94554c0f5c/rootfs","created":"2021-08-16T22:43:14.599551512Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1cd3b203f0f522cc79f0711a48ab21a8985eb676b0afb5a18059225755bea1a0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"73de0351bc3ba9190a758110ce31a25d612febb8
a72bd854bf515b02fe155a12","pid":6290,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73de0351bc3ba9190a758110ce31a25d612febb8a72bd854bf515b02fe155a12","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73de0351bc3ba9190a758110ce31a25d612febb8a72bd854bf515b02fe155a12/rootfs","created":"2021-08-16T22:43:37.318392838Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"73de0351bc3ba9190a758110ce31a25d612febb8a72bd854bf515b02fe155a12","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-mfshm_cb9ac226-b63f-4de1-b4af-b8e2bf280d95"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"762745540814d44f9818192f4231a792eac639d8555f0e44560cb7cfa0a304d9","pid":6441,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/762745540814d44f9818192f4231a792eac639d8555f0e44560cb7cfa0a304d9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/762745540814d44f9818192f4231a792e
ac639d8555f0e44560cb7cfa0a304d9/rootfs","created":"2021-08-16T22:43:40.159165318Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"762745540814d44f9818192f4231a792eac639d8555f0e44560cb7cfa0a304d9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_f3fc0038-f88e-416f-81e3-fb387b0e010a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"acfa6acd622519fbbdfbc3cc49cca060cd8efae7eb741a5d9b0b773af45869f9","pid":6384,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/acfa6acd622519fbbdfbc3cc49cca060cd8efae7eb741a5d9b0b773af45869f9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/acfa6acd622519fbbdfbc3cc49cca060cd8efae7eb741a5d9b0b773af45869f9/rootfs","created":"2021-08-16T22:43:38.009471936Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"73de0351bc3ba9190a758110ce31a25d612febb8a72bd854bf515b02fe155a12
"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b72889a5e9646897cc52367c89b8c9e64eb06e5c76cf26ec26ef04a83733c5a9","pid":6118,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b72889a5e9646897cc52367c89b8c9e64eb06e5c76cf26ec26ef04a83733c5a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b72889a5e9646897cc52367c89b8c9e64eb06e5c76cf26ec26ef04a83733c5a9/rootfs","created":"2021-08-16T22:43:36.578590148Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b72889a5e9646897cc52367c89b8c9e64eb06e5c76cf26ec26ef04a83733c5a9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-zwcwz_f85562a3-8576-4dbf-a2b2-3f6a3d199df3"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"bba06897412e67cd12b6ae3e8fb73b92f4ddcd2287f60ac024d984f45f17941b","pid":6805,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bba06897412e67cd12b6ae3e8fb73b92f4ddcd2287f60ac024d984f45f17941b","rootfs":"/ru
n/containerd/io.containerd.runtime.v2.task/k8s.io/bba06897412e67cd12b6ae3e8fb73b92f4ddcd2287f60ac024d984f45f17941b/rootfs","created":"2021-08-16T22:43:41.965363221Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bba06897412e67cd12b6ae3e8fb73b92f4ddcd2287f60ac024d984f45f17941b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-6fqh5_1375c060-0e73-47e5-b599-6d7e58617b31"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b","pid":6753,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b/rootfs","created":"2021-08-16T22:43:41.713747391Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cc
84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-b5fzd_2e56d6c3-4bfc-46cc-b263-0b26d0941122"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"d2b4592df27b12f825742f24ddd825a80ddb1dc22ba0ff1044a6d255e91c1480","pid":6200,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d2b4592df27b12f825742f24ddd825a80ddb1dc22ba0ff1044a6d255e91c1480","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d2b4592df27b12f825742f24ddd825a80ddb1dc22ba0ff1044a6d255e91c1480/rootfs","created":"2021-08-16T22:43:36.938482813Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b72889a5e9646897cc52367c89b8c9e64eb06e5c76cf26ec26ef04a83733c5a9"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e78080cd3b11841e768934d750433b0cfce146c83cc970c2abbcbb0ef3ce0b69","pid":5604,"status":"runn
ing","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e78080cd3b11841e768934d750433b0cfce146c83cc970c2abbcbb0ef3ce0b69","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e78080cd3b11841e768934d750433b0cfce146c83cc970c2abbcbb0ef3ce0b69/rootfs","created":"2021-08-16T22:43:13.383635568Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e78080cd3b11841e768934d750433b0cfce146c83cc970c2abbcbb0ef3ce0b69","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-embed-certs-20210816223333-6986_071058c7a3a75a94b15ff3cc58be11b4"},"owner":"root"}]
	I0816 22:43:56.650442   20238 cri.go:113] list returned 18 containers
	I0816 22:43:56.650460   20238 cri.go:116] container: {ID:00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8 Status:paused}
	I0816 22:43:56.650476   20238 cri.go:122] skipping {00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8 paused}: state = "paused", want "running"
	I0816 22:43:56.650489   20238 cri.go:116] container: {ID:0d72b213127cdbfe36e7d0b7d9293e8b21a09f8cad6b9fb3ff71d98d3e5f2c29 Status:running}
	I0816 22:43:56.650497   20238 cri.go:118] skipping 0d72b213127cdbfe36e7d0b7d9293e8b21a09f8cad6b9fb3ff71d98d3e5f2c29 - not in ps
	I0816 22:43:56.650505   20238 cri.go:116] container: {ID:14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356 Status:running}
	I0816 22:43:56.650511   20238 cri.go:116] container: {ID:1cd3b203f0f522cc79f0711a48ab21a8985eb676b0afb5a18059225755bea1a0 Status:running}
	I0816 22:43:56.650519   20238 cri.go:118] skipping 1cd3b203f0f522cc79f0711a48ab21a8985eb676b0afb5a18059225755bea1a0 - not in ps
	I0816 22:43:56.650533   20238 cri.go:116] container: {ID:3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522 Status:running}
	I0816 22:43:56.650540   20238 cri.go:116] container: {ID:3a8fef0311637c83fb416dbb1aef8d50687102077f4cc79f29069d5e7d9571a8 Status:running}
	I0816 22:43:56.650547   20238 cri.go:118] skipping 3a8fef0311637c83fb416dbb1aef8d50687102077f4cc79f29069d5e7d9571a8 - not in ps
	I0816 22:43:56.650553   20238 cri.go:116] container: {ID:3b689050c0847484ccb7645f1c71238fa222ce2da509e4b7c9396ed6c4a2ac56 Status:running}
	I0816 22:43:56.650560   20238 cri.go:118] skipping 3b689050c0847484ccb7645f1c71238fa222ce2da509e4b7c9396ed6c4a2ac56 - not in ps
	I0816 22:43:56.650568   20238 cri.go:116] container: {ID:5252b10be7bb737ec2020248c745eda7772c84f779b0b3c55c946d8a978aff12 Status:running}
	I0816 22:43:56.650575   20238 cri.go:116] container: {ID:654de8c4789a875b3548d819c9b56cb6cb50e2799fcbb2b1e67efa6ea8278322 Status:running}
	I0816 22:43:56.650584   20238 cri.go:116] container: {ID:6d8fa8177082e340822efb9f6f8c7938b9c592b339aeb928ced35b94554c0f5c Status:running}
	I0816 22:43:56.650591   20238 cri.go:116] container: {ID:73de0351bc3ba9190a758110ce31a25d612febb8a72bd854bf515b02fe155a12 Status:running}
	I0816 22:43:56.650601   20238 cri.go:118] skipping 73de0351bc3ba9190a758110ce31a25d612febb8a72bd854bf515b02fe155a12 - not in ps
	I0816 22:43:56.650607   20238 cri.go:116] container: {ID:762745540814d44f9818192f4231a792eac639d8555f0e44560cb7cfa0a304d9 Status:running}
	I0816 22:43:56.650615   20238 cri.go:118] skipping 762745540814d44f9818192f4231a792eac639d8555f0e44560cb7cfa0a304d9 - not in ps
	I0816 22:43:56.650620   20238 cri.go:116] container: {ID:acfa6acd622519fbbdfbc3cc49cca060cd8efae7eb741a5d9b0b773af45869f9 Status:running}
	I0816 22:43:56.650628   20238 cri.go:116] container: {ID:b72889a5e9646897cc52367c89b8c9e64eb06e5c76cf26ec26ef04a83733c5a9 Status:running}
	I0816 22:43:56.650647   20238 cri.go:118] skipping b72889a5e9646897cc52367c89b8c9e64eb06e5c76cf26ec26ef04a83733c5a9 - not in ps
	I0816 22:43:56.650660   20238 cri.go:116] container: {ID:bba06897412e67cd12b6ae3e8fb73b92f4ddcd2287f60ac024d984f45f17941b Status:running}
	I0816 22:43:56.650667   20238 cri.go:118] skipping bba06897412e67cd12b6ae3e8fb73b92f4ddcd2287f60ac024d984f45f17941b - not in ps
	I0816 22:43:56.650673   20238 cri.go:116] container: {ID:cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b Status:running}
	I0816 22:43:56.650683   20238 cri.go:118] skipping cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b - not in ps
	I0816 22:43:56.650689   20238 cri.go:116] container: {ID:d2b4592df27b12f825742f24ddd825a80ddb1dc22ba0ff1044a6d255e91c1480 Status:running}
	I0816 22:43:56.650698   20238 cri.go:116] container: {ID:e78080cd3b11841e768934d750433b0cfce146c83cc970c2abbcbb0ef3ce0b69 Status:running}
	I0816 22:43:56.650705   20238 cri.go:118] skipping e78080cd3b11841e768934d750433b0cfce146c83cc970c2abbcbb0ef3ce0b69 - not in ps
	I0816 22:43:56.650787   20238 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356
	I0816 22:43:56.669964   20238 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356 3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522
	I0816 22:43:56.689076   20238 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356 3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:43:56Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0816 22:43:57.229813   20238 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:43:57.243499   20238 pause.go:50] kubelet running: false
	I0816 22:43:57.243559   20238 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:43:57.434644   20238 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:43:57.434748   20238 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:43:57.556582   20238 cri.go:76] found id: "14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356"
	I0816 22:43:57.556614   20238 cri.go:76] found id: "acfa6acd622519fbbdfbc3cc49cca060cd8efae7eb741a5d9b0b773af45869f9"
	I0816 22:43:57.556622   20238 cri.go:76] found id: "d2b4592df27b12f825742f24ddd825a80ddb1dc22ba0ff1044a6d255e91c1480"
	I0816 22:43:57.556628   20238 cri.go:76] found id: "5252b10be7bb737ec2020248c745eda7772c84f779b0b3c55c946d8a978aff12"
	I0816 22:43:57.556633   20238 cri.go:76] found id: "6d8fa8177082e340822efb9f6f8c7938b9c592b339aeb928ced35b94554c0f5c"
	I0816 22:43:57.556639   20238 cri.go:76] found id: "00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8"
	I0816 22:43:57.556645   20238 cri.go:76] found id: "3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522"
	I0816 22:43:57.556650   20238 cri.go:76] found id: "1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18"
	I0816 22:43:57.556655   20238 cri.go:76] found id: "654de8c4789a875b3548d819c9b56cb6cb50e2799fcbb2b1e67efa6ea8278322"
	I0816 22:43:57.556664   20238 cri.go:76] found id: ""
	I0816 22:43:57.556727   20238 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0816 22:43:57.600727   20238 cri.go:103] JSON = [
	{"ociVersion":"1.0.2-dev","id":"00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8","pid":5734,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8/rootfs","created":"2021-08-16T22:43:14.27470469Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3a8fef0311637c83fb416dbb1aef8d50687102077f4cc79f29069d5e7d9571a8"},"owner":"root"},
	{"ociVersion":"1.0.2-dev","id":"0d72b213127cdbfe36e7d0b7d9293e8b21a09f8cad6b9fb3ff71d98d3e5f2c29","pid":6528,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d72b213127cdbfe36e7d0b7d9293e8b21a09f8cad6b9fb3ff71d98d3e5f2c29","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d72b213127cdbfe36e7d0b7d9293e8b21a09f8cad6b9fb3ff71d98d3e5f2c29/rootfs","created":"2021-08-16T22:43:40.18846409Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0d72b213127cdbfe36e7d0b7d9293e8b21a09f8cad6b9fb3ff71d98d3e5f2c29","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-qfrpw_abb75357-7b33-4327-aa7f-8e9c15a192f8"},"owner":"root"},
	{"ociVersion":"1.0.2-dev","id":"14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356","pid":6686,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356/rootfs","created":"2021-08-16T22:43:41.51578432Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"762745540814d44f9818192f4231a792eac639d8555f0e44560cb7cfa0a304d9"},"owner":"root"},
	{"ociVersion":"1.0.2-dev","id":"1cd3b203f0f522cc79f0711a48ab21a8985eb676b0afb5a18059225755bea1a0","pid":5625,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cd3b203f0f522cc79f0711a48ab21a8985eb676b0afb5a18059225755bea1a0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1cd3b203f0f522cc79f0711a48ab21a8985eb676b0afb5a18059225755bea1a0/rootfs","created":"2021-08-16T22:43:13.445186357Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1cd3b203f0f522cc79f0711a48ab21a8985eb676b0afb5a18059225755bea1a0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-embed-certs-20210816223333-6986_339023db77960ebb780280934f821008"},"owner":"root"},
	{"ociVersion":"1.0.2-dev","id":"3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522","pid":5699,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522/rootfs","created":"2021-08-16T22:43:14.15251317Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e78080cd3b11841e768934d750433b0cfce146c83cc970c2abbcbb0ef3ce0b69"},"owner":"root"},
	{"ociVersion":"1.0.2-dev","id":"3a8fef0311637c83fb416dbb1aef8d50687102077f4cc79f29069d5e7d9571a8","pid":5636,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a8fef0311637c83fb416dbb1aef8d50687102077f4cc79f29069d5e7d9571a8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a8fef0311637c83fb416dbb1aef8d50687102077f4cc79f29069d5e7d9571a8/rootfs","created":"2021-08-16T22:43:13.452354295Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3a8fef0311637c83fb416dbb1aef8d50687102077f4cc79f29069d5e7d9571a8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-embed-certs-20210816223333-6986_422293a16536c2eaadc36e5f34f8a60c"},"owner":"root"},
	{"ociVersion":"1.0.2-dev","id":"3b689050c0847484ccb7645f1c71238fa222ce2da509e4b7c9396ed6c4a2ac56","pid":5618,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b689050c0847484ccb7645f1c71238fa222ce2da509e4b7c9396ed6c4a2ac56","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b689050c0847484ccb7645f1c71238fa222ce2da509e4b7c9396ed6c4a2ac56/rootfs","created":"2021-08-16T22:43:13.449367367Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3b689050c0847484ccb7645f1c71238fa222ce2da509e4b7c9396ed6c4a2ac56","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-embed-certs-20210816223333-6986_bb6da460428b943b0fa1cd077b50be82"},"owner":"root"},
	{"ociVersion":"1.0.2-dev","id":"5252b10be7bb737ec2020248c745eda7772c84f779b0b3c55c946d8a978aff12","pid":5812,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5252b10be7bb737ec2020248c745eda7772c84f779b0b3c55c946d8a978aff12","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5252b10be7bb737ec2020248c745eda7772c84f779b0b3c55c946d8a978aff12/rootfs","created":"2021-08-16T22:43:14.716403137Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3b689050c0847484ccb7645f1c71238fa222ce2da509e4b7c9396ed6c4a2ac56"},"owner":"root"},
	{"ociVersion":"1.0.2-dev","id":"654de8c4789a875b3548d819c9b56cb6cb50e2799fcbb2b1e67efa6ea8278322","pid":6841,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/654de8c4789a875b3548d819c9b56cb6cb50e2799fcbb2b1e67efa6ea8278322","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/654de8c4789a875b3548d819c9b56cb6cb50e2799fcbb2b1e67efa6ea8278322/rootfs","created":"2021-08-16T22:43:42.488016828Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"bba06897412e67cd12b6ae3e8fb73b92f4ddcd2287f60ac024d984f45f17941b"},"owner":"root"},
	{"ociVersion":"1.0.2-dev","id":"6d8fa8177082e340822efb9f6f8c7938b9c592b339aeb928ced35b94554c0f5c","pid":5785,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d8fa8177082e340822efb9f6f8c7938b9c592b339aeb928ced35b94554c0f5c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d8fa8177082e340822efb9f6f8c7938b9c592b339aeb928ced35b94554c0f5c/rootfs","created":"2021-08-16T22:43:14.599551512Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1cd3b203f0f522cc79f0711a48ab21a8985eb676b0afb5a18059225755bea1a0"},"owner":"root"},
	{"ociVersion":"1.0.2-dev","id":"73de0351bc3ba9190a758110ce31a25d612febb8a72bd854bf515b02fe155a12","pid":6290,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73de0351bc3ba9190a758110ce31a25d612febb8a72bd854bf515b02fe155a12","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/73de0351bc3ba9190a758110ce31a25d612febb8a72bd854bf515b02fe155a12/rootfs","created":"2021-08-16T22:43:37.318392838Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"73de0351bc3ba9190a758110ce31a25d612febb8a72bd854bf515b02fe155a12","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-mfshm_cb9ac226-b63f-4de1-b4af-b8e2bf280d95"},"owner":"root"},
	{"ociVersion":"1.0.2-dev","id":"762745540814d44f9818192f4231a792eac639d8555f0e44560cb7cfa0a304d9","pid":6441,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/762745540814d44f9818192f4231a792eac639d8555f0e44560cb7cfa0a304d9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/762745540814d44f9818192f4231a792eac639d8555f0e44560cb7cfa0a304d9/rootfs","created":"2021-08-16T22:43:40.159165318Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"762745540814d44f9818192f4231a792eac639d8555f0e44560cb7cfa0a304d9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_f3fc0038-f88e-416f-81e3-fb387b0e010a"},"owner":"root"},
	{"ociVersion":"1.0.2-dev","id":"acfa6acd622519fbbdfbc3cc49cca060cd8efae7eb741a5d9b0b773af45869f9","pid":6384,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/acfa6acd622519fbbdfbc3cc49cca060cd8efae7eb741a5d9b0b773af45869f9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/acfa6acd622519fbbdfbc3cc49cca060cd8efae7eb741a5d9b0b773af45869f9/rootfs","created":"2021-08-16T22:43:38.009471936Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"73de0351bc3ba9190a758110ce31a25d612febb8a72bd854bf515b02fe155a12"},"owner":"root"},
	{"ociVersion":"1.0.2-dev","id":"b72889a5e9646897cc52367c89b8c9e64eb06e5c76cf26ec26ef04a83733c5a9","pid":6118,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b72889a5e9646897cc52367c89b8c9e64eb06e5c76cf26ec26ef04a83733c5a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b72889a5e9646897cc52367c89b8c9e64eb06e5c76cf26ec26ef04a83733c5a9/rootfs","created":"2021-08-16T22:43:36.578590148Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"b72889a5e9646897cc52367c89b8c9e64eb06e5c76cf26ec26ef04a83733c5a9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-zwcwz_f85562a3-8576-4dbf-a2b2-3f6a3d199df3"},"owner":"root"},
	{"ociVersion":"1.0.2-dev","id":"bba06897412e67cd12b6ae3e8fb73b92f4ddcd2287f60ac024d984f45f17941b","pid":6805,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bba06897412e67cd12b6ae3e8fb73b92f4ddcd2287f60ac024d984f45f17941b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bba06897412e67cd12b6ae3e8fb73b92f4ddcd2287f60ac024d984f45f17941b/rootfs","created":"2021-08-16T22:43:41.965363221Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"bba06897412e67cd12b6ae3e8fb73b92f4ddcd2287f60ac024d984f45f17941b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-6fqh5_1375c060-0e73-47e5-b599-6d7e58617b31"},"owner":"root"},
	{"ociVersion":"1.0.2-dev","id":"cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b","pid":6753,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b/rootfs","created":"2021-08-16T22:43:41.713747391Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-b5fzd_2e56d6c3-4bfc-46cc-b263-0b26d0941122"},"owner":"root"},
	{"ociVersion":"1.0.2-dev","id":"d2b4592df27b12f825742f24ddd825a80ddb1dc22ba0ff1044a6d255e91c1480","pid":6200,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d2b4592df27b12f825742f24ddd825a80ddb1dc22ba0ff1044a6d255e91c1480","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d2b4592df27b12f825742f24ddd825a80ddb1dc22ba0ff1044a6d255e91c1480/rootfs","created":"2021-08-16T22:43:36.938482813Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"b72889a5e9646897cc52367c89b8c9e64eb06e5c76cf26ec26ef04a83733c5a9"},"owner":"root"},
	{"ociVersion":"1.0.2-dev","id":"e78080cd3b11841e768934d750433b0cfce146c83cc970c2abbcbb0ef3ce0b69","pid":5604,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e78080cd3b11841e768934d750433b0cfce146c83cc970c2abbcbb0ef3ce0b69","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e78080cd3b11841e768934d750433b0cfce146c83cc970c2abbcbb0ef3ce0b69/rootfs","created":"2021-08-16T22:43:13.383635568Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e78080cd3b11841e768934d750433b0cfce146c83cc970c2abbcbb0ef3ce0b69","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-embed-certs-20210816223333-6986_071058c7a3a75a94b15ff3cc58be11b4"},"owner":"root"}]
	I0816 22:43:57.600935   20238 cri.go:113] list returned 18 containers
	I0816 22:43:57.600952   20238 cri.go:116] container: {ID:00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8 Status:paused}
	I0816 22:43:57.600964   20238 cri.go:122] skipping {00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8 paused}: state = "paused", want "running"
	I0816 22:43:57.600976   20238 cri.go:116] container: {ID:0d72b213127cdbfe36e7d0b7d9293e8b21a09f8cad6b9fb3ff71d98d3e5f2c29 Status:running}
	I0816 22:43:57.600987   20238 cri.go:118] skipping 0d72b213127cdbfe36e7d0b7d9293e8b21a09f8cad6b9fb3ff71d98d3e5f2c29 - not in ps
	I0816 22:43:57.600992   20238 cri.go:116] container: {ID:14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356 Status:paused}
	I0816 22:43:57.601004   20238 cri.go:122] skipping {14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356 paused}: state = "paused", want "running"
	I0816 22:43:57.601014   20238 cri.go:116] container: {ID:1cd3b203f0f522cc79f0711a48ab21a8985eb676b0afb5a18059225755bea1a0 Status:running}
	I0816 22:43:57.601020   20238 cri.go:118] skipping 1cd3b203f0f522cc79f0711a48ab21a8985eb676b0afb5a18059225755bea1a0 - not in ps
	I0816 22:43:57.601027   20238 cri.go:116] container: {ID:3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522 Status:running}
	I0816 22:43:57.601032   20238 cri.go:116] container: {ID:3a8fef0311637c83fb416dbb1aef8d50687102077f4cc79f29069d5e7d9571a8 Status:running}
	I0816 22:43:57.601040   20238 cri.go:118] skipping 3a8fef0311637c83fb416dbb1aef8d50687102077f4cc79f29069d5e7d9571a8 - not in ps
	I0816 22:43:57.601043   20238 cri.go:116] container: {ID:3b689050c0847484ccb7645f1c71238fa222ce2da509e4b7c9396ed6c4a2ac56 Status:running}
	I0816 22:43:57.601047   20238 cri.go:118] skipping 3b689050c0847484ccb7645f1c71238fa222ce2da509e4b7c9396ed6c4a2ac56 - not in ps
	I0816 22:43:57.601051   20238 cri.go:116] container: {ID:5252b10be7bb737ec2020248c745eda7772c84f779b0b3c55c946d8a978aff12 Status:running}
	I0816 22:43:57.601054   20238 cri.go:116] container: {ID:654de8c4789a875b3548d819c9b56cb6cb50e2799fcbb2b1e67efa6ea8278322 Status:running}
	I0816 22:43:57.601059   20238 cri.go:116] container: {ID:6d8fa8177082e340822efb9f6f8c7938b9c592b339aeb928ced35b94554c0f5c Status:running}
	I0816 22:43:57.601063   20238 cri.go:116] container: {ID:73de0351bc3ba9190a758110ce31a25d612febb8a72bd854bf515b02fe155a12 Status:running}
	I0816 22:43:57.601070   20238 cri.go:118] skipping 73de0351bc3ba9190a758110ce31a25d612febb8a72bd854bf515b02fe155a12 - not in ps
	I0816 22:43:57.601079   20238 cri.go:116] container: {ID:762745540814d44f9818192f4231a792eac639d8555f0e44560cb7cfa0a304d9 Status:running}
	I0816 22:43:57.601089   20238 cri.go:118] skipping 762745540814d44f9818192f4231a792eac639d8555f0e44560cb7cfa0a304d9 - not in ps
	I0816 22:43:57.601094   20238 cri.go:116] container: {ID:acfa6acd622519fbbdfbc3cc49cca060cd8efae7eb741a5d9b0b773af45869f9 Status:running}
	I0816 22:43:57.601105   20238 cri.go:116] container: {ID:b72889a5e9646897cc52367c89b8c9e64eb06e5c76cf26ec26ef04a83733c5a9 Status:running}
	I0816 22:43:57.601111   20238 cri.go:118] skipping b72889a5e9646897cc52367c89b8c9e64eb06e5c76cf26ec26ef04a83733c5a9 - not in ps
	I0816 22:43:57.601120   20238 cri.go:116] container: {ID:bba06897412e67cd12b6ae3e8fb73b92f4ddcd2287f60ac024d984f45f17941b Status:running}
	I0816 22:43:57.601126   20238 cri.go:118] skipping bba06897412e67cd12b6ae3e8fb73b92f4ddcd2287f60ac024d984f45f17941b - not in ps
	I0816 22:43:57.601136   20238 cri.go:116] container: {ID:cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b Status:running}
	I0816 22:43:57.601144   20238 cri.go:118] skipping cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b - not in ps
	I0816 22:43:57.601151   20238 cri.go:116] container: {ID:d2b4592df27b12f825742f24ddd825a80ddb1dc22ba0ff1044a6d255e91c1480 Status:running}
	I0816 22:43:57.601155   20238 cri.go:116] container: {ID:e78080cd3b11841e768934d750433b0cfce146c83cc970c2abbcbb0ef3ce0b69 Status:running}
	I0816 22:43:57.601159   20238 cri.go:118] skipping e78080cd3b11841e768934d750433b0cfce146c83cc970c2abbcbb0ef3ce0b69 - not in ps
	I0816 22:43:57.601205   20238 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522
	I0816 22:43:57.624207   20238 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522 5252b10be7bb737ec2020248c745eda7772c84f779b0b3c55c946d8a978aff12
	I0816 22:43:57.649249   20238 out.go:177] 
	W0816 22:43:57.649402   20238 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522 5252b10be7bb737ec2020248c745eda7772c84f779b0b3c55c946d8a978aff12: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:43:57Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0816 22:43:57.649418   20238 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0816 22:43:57.652034   20238 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0816 22:43:57.653503   20238 out.go:177] 

                                                
                                                
** /stderr **
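Editor's note on the filtering walk above (cri.go:113-122): minikube lists the runc tasks as JSON, then keeps only the tasks that are in state "running" and whose IDs also showed up in the earlier crictl ps output; everything else is logged as skipped. Below is a minimal Go sketch of that selection. The struct fields follow the `runc list -f json` output printed above; the function and variable names (pausable, inCrictlPS) are illustrative, not minikube's actual code.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// runcTask captures the fields of `runc list -f json` the filter needs.
	type runcTask struct {
		ID     string `json:"id"`
		Status string `json:"status"` // e.g. "running" or "paused"
	}

	// pausable keeps tasks that are running and were also reported by crictl ps,
	// mirroring the "skipping ...: state = ..." and "skipping ... - not in ps"
	// decisions in the log above.
	func pausable(listJSON []byte, inCrictlPS map[string]bool) ([]string, error) {
		var tasks []runcTask
		if err := json.Unmarshal(listJSON, &tasks); err != nil {
			return nil, err
		}
		var ids []string
		for _, t := range tasks {
			switch {
			case t.Status != "running":
				fmt.Printf("skipping {%s %s}: state = %q, want %q\n", t.ID, t.Status, t.Status, "running")
			case !inCrictlPS[t.ID]:
				fmt.Printf("skipping %s - not in ps\n", t.ID)
			default:
				ids = append(ids, t.ID)
			}
		}
		return ids, nil
	}

	func main() {
		// Toy inputs standing in for the real runc/crictl output above.
		listJSON := []byte(`[{"id":"aaa","status":"paused"},{"id":"bbb","status":"running"}]`)
		ids, err := pausable(listJSON, map[string]bool{"bbb": true})
		if err != nil {
			panic(err)
		}
		fmt.Println("pausable:", ids) // pausable: [bbb]
	}
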
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p embed-certs-20210816223333-6986 --alsologtostderr -v=1 failed: exit status 80
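Editor's note: exit status 80 is GUEST_PAUSE, and the immediate cause is visible in the stderr above. After pausing the first container, minikube invoked `sudo runc ... pause` with two container IDs at once, but runc's pause subcommand accepts exactly one ID per invocation ("pause" requires exactly 1 argument(s)). A hedged sketch of the one-ID-per-invocation workaround follows; `pauseEach` is a hypothetical helper for illustration, not the function minikube uses.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// pauseEach issues one `sudo runc --root <root> pause <id>` per container,
	// since passing several IDs to a single invocation fails with
	// `"pause" requires exactly 1 argument(s)`, as the log above shows.
	func pauseEach(root string, ids []string) error {
		for _, id := range ids {
			cmd := exec.Command("sudo", "runc", "--root", root, "pause", id)
			if out, err := cmd.CombinedOutput(); err != nil {
				return fmt.Errorf("runc pause %s: %v\n%s", id, err, out)
			}
		}
		return nil
	}

	func main() {
		// The two IDs minikube tried to pause in a single invocation above.
		ids := []string{
			"3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522",
			"5252b10be7bb737ec2020248c745eda7772c84f779b0b3c55c946d8a978aff12",
		}
		if err := pauseEach("/run/containerd/runc/k8s.io", ids); err != nil {
			fmt.Println(err)
		}
	}
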
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210816223333-6986 -n embed-certs-20210816223333-6986

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210816223333-6986 -n embed-certs-20210816223333-6986: exit status 2 (14.427604174s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
** stderr ** 
	E0816 22:44:12.086332   20319 status.go:422] Error apiserver status: https://192.168.105.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

                                                
                                                
** /stderr **
helpers_test.go:240: status error: exit status 2 (may be ok)
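Editor's note: the exit status 2 comes from the /healthz probe shown in the stderr above (status.go:422). Because etcd was left paused by the failed pause command, the apiserver answered 500 and minikube surfaced the verbose check list ("[-]etcd failed: reason withheld"). A minimal sketch of such a probe is below; it assumes a skip-verify TLS client purely for illustration, whereas minikube's real client authenticates with the cluster's CA certificate.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	// checkHealthz GETs the apiserver health endpoint and, on a non-200
	// answer, returns the verbose check list so failures such as
	// "[-]etcd failed" above are visible to the caller.
	func checkHealthz(url string) error {
		client := &http.Client{Transport: &http.Transport{
			// Illustration only: do NOT skip verification in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("apiserver status: %s returned error %d:\n%s", url, resp.StatusCode, body)
		}
		return nil
	}

	func main() {
		// Endpoint taken from the log above.
		if err := checkHealthz("https://192.168.105.129:8443/healthz"); err != nil {
			fmt.Println(err)
		}
	}
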
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20210816223333-6986 logs -n 25

                                                
                                                
=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p embed-certs-20210816223333-6986 logs -n 25: exit status 110 (12.83469761s)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                | disable-driver-mounts-20210816223418-6986      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:18 UTC | Mon, 16 Aug 2021 22:34:18 UTC |
	|         | disable-driver-mounts-20210816223418-6986         |                                                |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:31:54 UTC | Mon, 16 Aug 2021 22:34:34 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:44 UTC | Mon, 16 Aug 2021 22:34:45 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816223156-6986                 | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:31:56 UTC | Mon, 16 Aug 2021 22:35:04 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:33:33 UTC | Mon, 16 Aug 2021 22:35:08 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:16 UTC | Mon, 16 Aug 2021 22:35:17 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:19 UTC | Mon, 16 Aug 2021 22:35:20 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:18 UTC | Mon, 16 Aug 2021 22:35:42 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:51 UTC | Mon, 16 Aug 2021 22:35:52 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:45 UTC | Mon, 16 Aug 2021 22:36:18 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:18 UTC | Mon, 16 Aug 2021 22:36:18 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:17 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:20 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:52 UTC | Mon, 16 Aug 2021 22:37:25 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:37:25 UTC | Mon, 16 Aug 2021 22:37:25 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816223156-6986                 | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:42:31 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:42:42 UTC | Mon, 16 Aug 2021 22:42:42 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:18 UTC | Mon, 16 Aug 2021 22:43:37 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:43:44 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:43:47 UTC | Mon, 16 Aug 2021 22:43:47 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:37:25 UTC | Mon, 16 Aug 2021 22:43:52 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:43:55 UTC | Mon, 16 Aug 2021 22:43:55 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:44:03 UTC | Mon, 16 Aug 2021 22:44:03 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:37:25
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:37:25.306577   19204 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:37:25.306653   19204 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:37:25.306656   19204 out.go:311] Setting ErrFile to fd 2...
	I0816 22:37:25.306663   19204 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:37:25.307072   19204 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:37:25.307547   19204 out.go:305] Setting JSON to false
	I0816 22:37:25.351342   19204 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":4807,"bootTime":1629148638,"procs":188,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:37:25.351461   19204 start.go:121] virtualization: kvm guest
	I0816 22:37:25.353955   19204 out.go:177] * [default-k8s-different-port-20210816223418-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:37:25.355393   19204 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:37:25.354127   19204 notify.go:169] Checking for updates...
	I0816 22:37:25.356781   19204 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:37:25.358158   19204 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:37:25.364678   19204 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:37:25.365267   19204 config.go:177] Loaded profile config "default-k8s-different-port-20210816223418-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:37:25.365899   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:37:25.365956   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:37:25.381650   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46493
	I0816 22:37:25.382065   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:37:25.382798   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:37:25.382820   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:37:25.383330   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:37:25.383519   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:25.383721   19204 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:37:25.384192   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:37:25.384260   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:37:25.401082   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44899
	I0816 22:37:25.402507   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:37:25.403115   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:37:25.403179   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:37:25.403663   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:37:25.403903   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:25.439751   19204 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 22:37:25.439781   19204 start.go:278] selected driver: kvm2
	I0816 22:37:25.439788   19204 start.go:751] validating driver "kvm2" against &{Name:default-k8s-different-port-20210816223418-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210816223418-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:37:25.439905   19204 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 22:37:25.441282   19204 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:37:25.441453   19204 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 22:37:25.455762   19204 install.go:137] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2 version is 1.22.0
	I0816 22:37:25.456183   19204 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 22:37:25.456219   19204 cni.go:93] Creating CNI manager for ""
	I0816 22:37:25.456234   19204 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:37:25.456245   19204 start_flags.go:277] config:
	{Name:default-k8s-different-port-20210816223418-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210816223418-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:37:25.456384   19204 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:37:25.458420   19204 out.go:177] * Starting control plane node default-k8s-different-port-20210816223418-6986 in cluster default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.458447   19204 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:37:25.458480   19204 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0816 22:37:25.458495   19204 cache.go:56] Caching tarball of preloaded images
	I0816 22:37:25.458602   19204 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 22:37:25.458622   19204 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0816 22:37:25.458779   19204 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/config.json ...
	I0816 22:37:25.459003   19204 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:37:25.459033   19204 start.go:313] acquiring machines lock for default-k8s-different-port-20210816223418-6986: {Name:mk808edd60d1305a42bb85791729eff4573dbb15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 22:37:25.459101   19204 start.go:317] acquired machines lock for "default-k8s-different-port-20210816223418-6986" in 48.071µs
	I0816 22:37:25.459123   19204 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:37:25.459131   19204 fix.go:55] fixHost starting: 
	I0816 22:37:25.459569   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:37:25.459614   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:37:25.473634   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44349
	I0816 22:37:25.474153   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:37:25.474765   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:37:25.474786   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:37:25.475205   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:37:25.475409   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:25.475621   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetState
	I0816 22:37:25.479447   19204 fix.go:108] recreateIfNeeded on default-k8s-different-port-20210816223418-6986: state=Stopped err=<nil>
	I0816 22:37:25.479498   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	W0816 22:37:25.479660   19204 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:37:21.322104   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:21.822129   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:22.321669   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:22.821492   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:23.322452   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:23.822419   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:24.322141   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:24.821615   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:24.856062   18923 api_server.go:70] duration metric: took 8.045517198s to wait for apiserver process to appear ...
	I0816 22:37:24.856091   18923 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:37:24.856103   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:24.856734   18923 api_server.go:255] stopped: https://192.168.116.66:8443/healthz: Get "https://192.168.116.66:8443/healthz": dial tcp 192.168.116.66:8443: connect: connection refused
	I0816 22:37:25.357442   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
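The 18923 run above shows minikube's two-phase apiserver wait: repeated `pgrep` probes until a kube-apiserver process exists, then `/healthz` polls on roughly a 500ms cadence until the endpoint stops refusing connections. A minimal Go sketch of the polling phase, assuming a self-signed serving certificate (hence the insecure transport); `waitForHealthz` is a hypothetical helper, not minikube's actual api_server.go code:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // 200 or the deadline passes. TLS verification is skipped because the
    // apiserver serves a cluster-local certificate.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // healthz reported "ok"
    			}
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence in the log
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.116.66:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }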
	I0816 22:37:22.382628   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Getting to WaitForSSH function...
	I0816 22:37:22.388062   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:22.388472   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:22.388501   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:22.388736   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Using SSH client type: external
	I0816 22:37:22.388774   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa (-rw-------)
	I0816 22:37:22.388825   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.105.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:37:22.388851   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | About to run SSH command:
	I0816 22:37:22.388868   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | exit 0
	I0816 22:37:23.527862   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | SSH cmd err, output: <nil>: 
	I0816 22:37:23.528297   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetConfigRaw
	I0816 22:37:23.529175   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetIP
	I0816 22:37:23.535445   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.535831   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.535862   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.536325   18929 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/config.json ...
	I0816 22:37:23.536603   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:23.536838   18929 machine.go:88] provisioning docker machine ...
	I0816 22:37:23.536860   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:23.537120   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetMachineName
	I0816 22:37:23.537298   18929 buildroot.go:166] provisioning hostname "embed-certs-20210816223333-6986"
	I0816 22:37:23.537328   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetMachineName
	I0816 22:37:23.537497   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:23.543084   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.543520   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.543560   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.543770   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:23.543953   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:23.544122   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:23.544284   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:23.544470   18929 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:23.544676   18929 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.105.129 22 <nil> <nil>}
	I0816 22:37:23.544698   18929 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20210816223333-6986 && echo "embed-certs-20210816223333-6986" | sudo tee /etc/hostname
	I0816 22:37:23.682935   18929 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20210816223333-6986
	
	I0816 22:37:23.682982   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:23.689555   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.690034   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.690071   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.690297   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:23.690526   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:23.690738   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:23.690910   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:23.691116   18929 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:23.691321   18929 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.105.129 22 <nil> <nil>}
	I0816 22:37:23.691351   18929 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20210816223333-6986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210816223333-6986/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20210816223333-6986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:37:23.826330   18929 main.go:130] libmachine: SSH cmd err, output: <nil>: 
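Both provisioning steps above (set the hostname, then patch /etc/hosts) run through an external `ssh` client with host-key checking disabled, which is safe here only because the VM was just (re)created. A rough Go equivalent of that pattern; the host, user, and key path are illustrative placeholders, not values minikube hardcodes:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // runSSH executes a remote command the way the external SSH client in
    // the log does: key-based auth, no host-key checking.
    func runSSH(host, keyPath, command string) ([]byte, error) {
    	args := []string{
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-i", keyPath,
    		"docker@" + host,
    		command,
    	}
    	return exec.Command("ssh", args...).CombinedOutput()
    }

    func main() {
    	host := "192.168.105.129" // from the DHCP lease in the log
    	key := "/path/to/id_rsa"  // placeholder for the machine's private key
    	name := "embed-certs-20210816223333-6986"

    	// The first provisioning step shown in the log: set the hostname
    	// and persist it to /etc/hostname.
    	cmd := fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname`, name)
    	if out, err := runSSH(host, key, cmd); err != nil {
    		fmt.Printf("hostname step failed: %v\n%s", err, out)
    	}
    }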
	I0816 22:37:23.826357   18929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:37:23.826393   18929 buildroot.go:174] setting up certificates
	I0816 22:37:23.826403   18929 provision.go:83] configureAuth start
	I0816 22:37:23.826415   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetMachineName
	I0816 22:37:23.826673   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetIP
	I0816 22:37:23.832833   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.833221   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.833252   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.833505   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:23.839058   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.839437   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.839468   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.839721   18929 provision.go:138] copyHostCerts
	I0816 22:37:23.839785   18929 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:37:23.839801   18929 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:37:23.839858   18929 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:37:23.840010   18929 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:37:23.840023   18929 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:37:23.840050   18929 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:37:23.840148   18929 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:37:23.840160   18929 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:37:23.840181   18929 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
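copyHostCerts follows a simple found/remove/copy pattern for each of ca.pem, cert.pem, and key.pem. A standalone sketch of that pattern, with illustrative paths rather than the jenkins workspace paths from the log:

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    )

    // replaceFile mirrors the copyHostCerts sequence in the log: if the
    // destination already exists it is removed first, then the source is
    // copied over with restrictive permissions.
    func replaceFile(src, dst string) error {
    	if _, err := os.Stat(dst); err == nil {
    		if err := os.Remove(dst); err != nil { // "found ..., removing ..."
    			return err
    		}
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, in)
    	return err
    }

    func main() {
    	for _, f := range []string{"ca.pem", "cert.pem", "key.pem"} {
    		if err := replaceFile("certs/"+f, f); err != nil {
    			fmt.Println(err)
    		}
    	}
    }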
	I0816 22:37:23.840251   18929 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20210816223333-6986 san=[192.168.105.129 192.168.105.129 localhost 127.0.0.1 minikube embed-certs-20210816223333-6986]
	I0816 22:37:24.071276   18929 provision.go:172] copyRemoteCerts
	I0816 22:37:24.071347   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:37:24.071383   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.077584   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.078065   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.078133   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.078307   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.078500   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.078636   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.078743   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:37:24.168996   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:37:24.190581   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0816 22:37:24.211894   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:37:24.234970   18929 provision.go:86] duration metric: configureAuth took 408.533613ms
	I0816 22:37:24.235001   18929 buildroot.go:189] setting minikube options for container-runtime
	I0816 22:37:24.235282   18929 config.go:177] Loaded profile config "embed-certs-20210816223333-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:37:24.235303   18929 machine.go:91] provisioned docker machine in 698.450664ms
	I0816 22:37:24.235313   18929 start.go:267] post-start starting for "embed-certs-20210816223333-6986" (driver="kvm2")
	I0816 22:37:24.235321   18929 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:37:24.235352   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.235711   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:37:24.235748   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.242219   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.242647   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.242677   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.242968   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.243197   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.243376   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.243542   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:37:24.342244   18929 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:37:24.348430   18929 info.go:137] Remote host: Buildroot 2020.02.12
	I0816 22:37:24.348458   18929 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:37:24.348527   18929 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:37:24.348678   18929 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0816 22:37:24.348794   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:37:24.358370   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:37:24.378832   18929 start.go:270] post-start completed in 143.493882ms
	I0816 22:37:24.378891   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.379183   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.385172   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.385565   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.385596   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.385720   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.385936   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.386069   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.386238   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.386404   18929 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:24.386604   18929 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.105.129 22 <nil> <nil>}
	I0816 22:37:24.386621   18929 main.go:130] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0816 22:37:24.513150   18929 main.go:130] libmachine: SSH cmd err, output: <nil>: 1629153444.435910196
	
	I0816 22:37:24.513175   18929 fix.go:212] guest clock: 1629153444.435910196
	I0816 22:37:24.513185   18929 fix.go:225] Guest: 2021-08-16 22:37:24.435910196 +0000 UTC Remote: 2021-08-16 22:37:24.379164096 +0000 UTC m=+28.470229855 (delta=56.7461ms)
	I0816 22:37:24.513209   18929 fix.go:196] guest clock delta is within tolerance: 56.7461ms
	I0816 22:37:24.513220   18929 fix.go:57] fixHost completed within 14.813246061s
	I0816 22:37:24.513226   18929 start.go:80] releasing machines lock for "embed-certs-20210816223333-6986", held for 14.813280431s
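The fix step compares the guest's seconds.nanoseconds clock reading against the host clock and accepts the machine when the delta is small (56.7ms here). A sketch of that comparison; the one-second tolerance below is an assumption for illustration, not necessarily the value minikube uses:

    package main

    import (
    	"fmt"
    	"time"
    )

    // clockDeltaOK reports whether the guest clock is within tolerance of
    // the host clock, using the absolute difference.
    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	// The guest reports seconds.nanoseconds, e.g. 1629153444.435910196.
    	guest := time.Unix(1629153444, 435910196)
    	host := time.Now()
    	delta, ok := clockDeltaOK(guest, host, time.Second)
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
    }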
	I0816 22:37:24.513267   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.513532   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetIP
	I0816 22:37:24.519703   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.520118   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.520149   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.520319   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.520528   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.521062   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.521300   18929 ssh_runner.go:149] Run: systemctl --version
	I0816 22:37:24.521326   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.521364   18929 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:37:24.521406   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.527844   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.527923   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.528257   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.528281   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.528308   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.528323   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.528556   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.528678   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.528724   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.528933   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.528943   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.529108   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.529179   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:37:24.529267   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:37:24.634682   18929 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:37:24.634891   18929 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:37:24.131199   18635 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:37:24.131267   18635 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:37:24.140028   18635 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
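The bridge CNI step writes a small conflist into /etc/cni/net.d/. The exact 457-byte file is not reproduced in the log, so the config below only illustrates the general shape of a bridge conflist; the subnet, plugin list, and names are assumptions:

    package main

    import (
    	"fmt"
    	"os"
    )

    // An illustrative bridge CNI conflist in the shape kubelet expects
    // under /etc/cni/net.d/. Field values here are assumed, not taken
    // from minikube's shipped file.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    func main() {
    	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0644); err != nil {
    		fmt.Println(err)
    	}
    }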
	I0816 22:37:24.157600   18635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:37:24.171359   18635 system_pods.go:59] 8 kube-system pods found
	I0816 22:37:24.171398   18635 system_pods.go:61] "coredns-fb8b8dccf-qwcrg" [fd98f945-fee1-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171407   18635 system_pods.go:61] "etcd-old-k8s-version-20210816223154-6986" [1d77612e-fee2-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171414   18635 system_pods.go:61] "kube-apiserver-old-k8s-version-20210816223154-6986" [152107a2-fee2-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171420   18635 system_pods.go:61] "kube-controller-manager-old-k8s-version-20210816223154-6986" [8620a0da-fee2-11eb-b5b6-525400bf2371] Pending
	I0816 22:37:24.171426   18635 system_pods.go:61] "kube-proxy-nvb2s" [fdaa2b42-fee1-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171438   18635 system_pods.go:61] "kube-scheduler-old-k8s-version-20210816223154-6986" [1b1505e6-fee2-11eb-bb5b-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:37:24.171454   18635 system_pods.go:61] "metrics-server-8546d8b77b-gl6jr" [28801d4e-fee2-11eb-bb5b-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:37:24.171462   18635 system_pods.go:61] "storage-provisioner" [ff1e11f1-fee1-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171469   18635 system_pods.go:74] duration metric: took 13.840978ms to wait for pod list to return data ...
	I0816 22:37:24.171481   18635 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:37:24.176303   18635 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:37:24.176347   18635 node_conditions.go:123] node cpu capacity is 2
	I0816 22:37:24.176360   18635 node_conditions.go:105] duration metric: took 4.872863ms to run NodePressure ...
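The NodePressure verification reads node capacity (ephemeral storage and CPU count) from the API before proceeding. A client-go sketch that prints the same two figures; the kubeconfig path is a placeholder, and this is not minikube's actual node_conditions.go helper:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		// The two figures the log prints: ephemeral storage and CPU count.
    		storage := n.Status.Capacity["ephemeral-storage"]
    		cpu := n.Status.Capacity["cpu"]
    		fmt.Printf("%s: storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    	}
    }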
	I0816 22:37:24.176376   18635 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:25.292041   18635 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.115642082s)
	I0816 22:37:25.292077   18635 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:37:25.325547   18635 kubeadm.go:746] kubelet initialised
	I0816 22:37:25.325574   18635 kubeadm.go:747] duration metric: took 33.485813ms waiting for restarted kubelet to initialise ...
	I0816 22:37:25.325590   18635 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:37:25.351142   18635 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:27.387702   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
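The pod_ready wait above keys off the PodReady condition of each system-critical pod, re-checking every couple of seconds. A client-go sketch of the same check; the kubeconfig path is a placeholder, the pod name is the coredns pod from the log, and this is not minikube's actual pod_ready.go code:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady checks the PodReady condition, which is what the log's
    // "Ready":"False" status lines report.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-fb8b8dccf-qwcrg", metav1.GetOptions{})
    		if err == nil && podReady(pod) {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second) // the log shows status checks roughly every 2s
    	}
    }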
	I0816 22:37:25.482074   19204 out.go:177] * Restarting existing kvm2 VM for "default-k8s-different-port-20210816223418-6986" ...
	I0816 22:37:25.482104   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Start
	I0816 22:37:25.482316   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Ensuring networks are active...
	I0816 22:37:25.484598   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Ensuring network default is active
	I0816 22:37:25.485014   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Ensuring network mk-default-k8s-different-port-20210816223418-6986 is active
	I0816 22:37:25.485452   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Getting domain xml...
	I0816 22:37:25.487765   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Creating domain...
	I0816 22:37:25.923048   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Waiting to get IP...
	I0816 22:37:25.924065   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.924660   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Found IP for machine: 192.168.50.186
	I0816 22:37:25.924682   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Reserving static IP address...
	I0816 22:37:25.924701   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has current primary IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.925155   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "default-k8s-different-port-20210816223418-6986", mac: "52:54:00:ed:d4:05", ip: "192.168.50.186"} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:34:34 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:25.925187   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | skip adding static IP to network mk-default-k8s-different-port-20210816223418-6986 - found existing host DHCP lease matching {name: "default-k8s-different-port-20210816223418-6986", mac: "52:54:00:ed:d4:05", ip: "192.168.50.186"}
	I0816 22:37:25.925202   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Reserved static IP address: 192.168.50.186
	I0816 22:37:25.925219   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Waiting for SSH to be available...
	I0816 22:37:25.925234   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Getting to WaitForSSH function...
	I0816 22:37:25.930369   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.930660   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:34:34 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:25.930705   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.930802   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Using SSH client type: external
	I0816 22:37:25.930842   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa (-rw-------)
	I0816 22:37:25.930888   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:37:25.931010   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | About to run SSH command:
	I0816 22:37:25.931033   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | exit 0
	I0816 22:37:30.356304   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:37:30.356337   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:37:30.357361   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:30.544479   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:30.544514   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:30.857809   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:30.866881   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:30.866920   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:28.652395   18929 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.017437883s)
	I0816 22:37:28.652577   18929 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0816 22:37:28.652647   18929 ssh_runner.go:149] Run: which lz4
	I0816 22:37:28.657345   18929 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0816 22:37:28.662555   18929 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0816 22:37:28.662584   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (928970367 bytes)
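Preload handling is check-then-transfer: a remote `stat` of /preloaded.tar.lz4 first, and only when that fails the ~900 MB scp seen above. A sketch of that sequence using external ssh/scp; the host, key path, and file paths are illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensurePreload mirrors the sequence in the log: stat the tarball on
    // the guest, and only scp the large preload archive when the check
    // fails.
    func ensurePreload(host, key, local, remote string) error {
    	stat := exec.Command("ssh", "-i", key, "-o", "StrictHostKeyChecking=no",
    		"docker@"+host, "stat "+remote)
    	if stat.Run() == nil {
    		return nil // already present, skip the large transfer
    	}
    	scp := exec.Command("scp", "-i", key, "-o", "StrictHostKeyChecking=no",
    		local, "docker@"+host+":"+remote)
    	out, err := scp.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("scp failed: %v\n%s", err, out)
    	}
    	return nil
    }

    func main() {
    	if err := ensurePreload("192.168.105.129", "/path/to/id_rsa",
    		"preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4",
    		"/preloaded.tar.lz4"); err != nil {
    		fmt.Println(err)
    	}
    }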
	I0816 22:37:31.357641   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:31.385946   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:31.385974   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:31.857651   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:31.878038   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:31.878070   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:32.357730   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:32.371926   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:32.371954   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:32.857204   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:32.867865   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 200:
	ok
	I0816 22:37:32.881085   18923 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:37:32.881113   18923 api_server.go:129] duration metric: took 8.025015474s to wait for apiserver health ...
	I0816 22:37:32.881124   18923 cni.go:93] Creating CNI manager for ""
	I0816 22:37:32.881132   18923 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:37:29.389763   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:31.391442   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:35.155848   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | SSH cmd err, output: exit status 255: 
	I0816 22:37:35.155882   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0816 22:37:35.155896   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | command : exit 0
	I0816 22:37:35.155905   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | err     : exit status 255
	I0816 22:37:35.155918   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | output  : 
	I0816 22:37:32.883184   18923 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:37:32.883268   18923 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:37:32.927942   18923 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
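The "scp memory" line above writes a 457-byte bridge CNI conflist straight from an in-memory buffer to /etc/cni/net.d/1-k8s.conflist over SSH. A representative sketch of that idea in Go; the conflist field values are assumptions for illustration, not the exact payload minikube generates:

	package main

	import "os"

	// A representative bridge CNI conflist; values are assumed, not the
	// exact 457-byte payload written by minikube.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`

	// Write the config from memory, which is what the
	// "scp memory --> /etc/cni/net.d/1-k8s.conflist" step does over SSH.
	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}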
	I0816 22:37:33.011939   18923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:37:33.043009   18923 system_pods.go:59] 8 kube-system pods found
	I0816 22:37:33.043056   18923 system_pods.go:61] "coredns-78fcd69978-nzf79" [a95afe1c-4f93-44a8-b669-b42c72f3500d] Running
	I0816 22:37:33.043064   18923 system_pods.go:61] "etcd-no-preload-20210816223156-6986" [fc40f0e0-16ef-4ba8-b5fd-17f4684d3a13] Running
	I0816 22:37:33.043076   18923 system_pods.go:61] "kube-apiserver-no-preload-20210816223156-6986" [f13df2c8-5aa8-49c3-89c0-b584ff8c62c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 22:37:33.043083   18923 system_pods.go:61] "kube-controller-manager-no-preload-20210816223156-6986" [8b866a1c-d283-4410-acbf-be2dbaa0f025] Running
	I0816 22:37:33.043094   18923 system_pods.go:61] "kube-proxy-64m6s" [fc5086fe-a671-4078-b76c-0c8f0656dca6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:37:33.043108   18923 system_pods.go:61] "kube-scheduler-no-preload-20210816223156-6986" [5db4c302-251a-47dc-90b9-424206ed445d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:37:33.043123   18923 system_pods.go:61] "metrics-server-7c784ccb57-44llk" [319102e5-661e-43bc-9c07-07463f6b1e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:37:33.043129   18923 system_pods.go:61] "storage-provisioner" [3da85640-a722-4ba1-a886-926bcaf81b8e] Running
	I0816 22:37:33.043140   18923 system_pods.go:74] duration metric: took 31.176037ms to wait for pod list to return data ...
	I0816 22:37:33.043149   18923 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:37:33.049500   18923 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:37:33.049531   18923 node_conditions.go:123] node cpu capacity is 2
	I0816 22:37:33.049544   18923 node_conditions.go:105] duration metric: took 6.385759ms to run NodePressure ...
	I0816 22:37:33.049562   18923 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:33.993434   18923 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:37:34.012191   18923 kubeadm.go:746] kubelet initialised
	I0816 22:37:34.012215   18923 kubeadm.go:747] duration metric: took 18.75429ms waiting for restarted kubelet to initialise ...
	I0816 22:37:34.012224   18923 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:37:34.033224   18923 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-nzf79" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:34.059145   18923 pod_ready.go:92] pod "coredns-78fcd69978-nzf79" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:34.059169   18923 pod_ready.go:81] duration metric: took 25.912051ms waiting for pod "coredns-78fcd69978-nzf79" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:34.059183   18923 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
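Each pod_ready.go wait above polls a pod until its Ready condition reports True, within the 4m0s budget. A minimal client-go sketch of that check (the poll interval and error handling are assumptions; this is not minikube's pod_ready.go):

	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady blocks until the pod's Ready condition is True,
	// polling twice a second for up to the 4m0s budget seen in the log.
	func waitPodReady(cs kubernetes.Interface, ns, name string) error {
		return wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // pod may not exist yet right after a restart; keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}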
	I0816 22:37:32.660993   18929 containerd.go:546] Took 4.003687 seconds to copy over tarball
	I0816 22:37:32.661054   18929 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 22:37:33.892216   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:36.388385   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:38.156062   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Getting to WaitForSSH function...
	I0816 22:37:38.161988   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:38.162321   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:34:34 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:38.162379   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:38.162468   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Using SSH client type: external
	I0816 22:37:38.162499   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa (-rw-------)
	I0816 22:37:38.162538   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:37:38.162552   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | About to run SSH command:
	I0816 22:37:38.162570   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | exit 0
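The WaitForSSH step above shells out to the system ssh binary with the flags shown and runs `exit 0` until it succeeds; the earlier exit status 255 simply means the guest's sshd was not accepting connections yet. A sketch of that probe (host, key path, and retry budget are illustrative, not libmachine's actual code):

	package sketch

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForSSH runs "exit 0" through the external ssh client until the
	// daemon accepts the connection; exit status 255 means "not yet".
	func waitForSSH(host, keyPath string, attempts int) error {
		for i := 0; i < attempts; i++ {
			cmd := exec.Command("ssh",
				"-o", "StrictHostKeyChecking=no",
				"-o", "UserKnownHostsFile=/dev/null",
				"-o", "ConnectTimeout=10",
				"-i", keyPath,
				"docker@"+host, "exit 0")
			if cmd.Run() == nil {
				return nil
			}
			time.Sleep(3 * time.Second) // the log shows roughly 3s between attempts
		}
		return fmt.Errorf("ssh to %s still failing after %d attempts", host, attempts)
	}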
	I0816 22:37:36.102180   18923 pod_ready.go:102] pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:38.889153   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:41.402823   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | SSH cmd err, output: <nil>: 
	I0816 22:37:41.403283   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetConfigRaw
	I0816 22:37:41.404010   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetIP
	I0816 22:37:41.410017   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.410394   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.410432   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.410693   19204 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/config.json ...
	I0816 22:37:41.410926   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:41.411142   19204 machine.go:88] provisioning docker machine ...
	I0816 22:37:41.411167   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:41.411335   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetMachineName
	I0816 22:37:41.411477   19204 buildroot.go:166] provisioning hostname "default-k8s-different-port-20210816223418-6986"
	I0816 22:37:41.411499   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetMachineName
	I0816 22:37:41.411620   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:41.416760   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.417121   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.417154   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.417291   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:41.417487   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:41.417620   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:41.417769   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:41.417933   19204 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:41.418151   19204 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 22:37:41.418167   19204 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20210816223418-6986 && echo "default-k8s-different-port-20210816223418-6986" | sudo tee /etc/hostname
	I0816 22:37:41.560416   19204 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20210816223418-6986
	
	I0816 22:37:41.560449   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:41.566690   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.567028   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.567064   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.567351   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:41.567542   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:41.567703   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:41.567827   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:41.567996   19204 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:41.568193   19204 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 22:37:41.568221   19204 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20210816223418-6986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20210816223418-6986/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20210816223418-6986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:37:41.743484   19204 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:37:41.743518   19204 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:37:41.743559   19204 buildroot.go:174] setting up certificates
	I0816 22:37:41.743576   19204 provision.go:83] configureAuth start
	I0816 22:37:41.743593   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetMachineName
	I0816 22:37:41.743895   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetIP
	I0816 22:37:41.750014   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.750423   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.750467   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.750809   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:41.756158   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.756536   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.756569   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.756717   19204 provision.go:138] copyHostCerts
	I0816 22:37:41.756789   19204 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:37:41.756799   19204 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:37:41.756862   19204 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:37:41.756962   19204 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:37:41.756972   19204 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:37:41.756994   19204 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:37:41.757071   19204 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:37:41.757082   19204 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:37:41.757102   19204 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
	I0816 22:37:41.757156   19204 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20210816223418-6986 san=[192.168.50.186 192.168.50.186 localhost 127.0.0.1 minikube default-k8s-different-port-20210816223418-6986]
	I0816 22:37:42.356131   19204 provision.go:172] copyRemoteCerts
	I0816 22:37:42.356205   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:37:42.356250   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.362214   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.362513   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.362547   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.362780   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.362992   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.363219   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.363363   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:37:42.482862   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:37:42.512838   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0816 22:37:42.540047   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:37:42.568047   19204 provision.go:86] duration metric: configureAuth took 824.454088ms
	I0816 22:37:42.568077   19204 buildroot.go:189] setting minikube options for container-runtime
	I0816 22:37:42.568300   19204 config.go:177] Loaded profile config "default-k8s-different-port-20210816223418-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:37:42.568315   19204 machine.go:91] provisioned docker machine in 1.157156536s
	I0816 22:37:42.568324   19204 start.go:267] post-start starting for "default-k8s-different-port-20210816223418-6986" (driver="kvm2")
	I0816 22:37:42.568333   19204 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:37:42.568368   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.568715   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:37:42.568749   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.574488   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.574891   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.574928   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.575140   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.575339   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.575523   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.575710   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:37:42.676578   19204 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:37:42.682148   19204 info.go:137] Remote host: Buildroot 2020.02.12
	I0816 22:37:42.682181   19204 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:37:42.682247   19204 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:37:42.682409   19204 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0816 22:37:42.682558   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:37:42.691519   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:37:42.711453   19204 start.go:270] post-start completed in 143.110809ms
	I0816 22:37:42.711496   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.711732   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.718125   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.718511   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.718547   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.718832   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.719063   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.719246   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.719404   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.719588   19204 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:42.719762   19204 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 22:37:42.719775   19204 main.go:130] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0816 22:37:42.864591   19204 main.go:130] libmachine: SSH cmd err, output: <nil>: 1629153462.785763979
	
	I0816 22:37:42.864617   19204 fix.go:212] guest clock: 1629153462.785763979
	I0816 22:37:42.864627   19204 fix.go:225] Guest: 2021-08-16 22:37:42.785763979 +0000 UTC Remote: 2021-08-16 22:37:42.711713193 +0000 UTC m=+17.455762277 (delta=74.050786ms)
	I0816 22:37:42.864651   19204 fix.go:196] guest clock delta is within tolerance: 74.050786ms
	I0816 22:37:42.864660   19204 fix.go:57] fixHost completed within 17.405528602s
	I0816 22:37:42.864666   19204 start.go:80] releasing machines lock for "default-k8s-different-port-20210816223418-6986", held for 17.405551891s
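The fix.go lines above compare the guest clock against the host ("Remote") timestamp and accept the drift when it falls within tolerance; here 22:37:42.785763979 minus 22:37:42.711713193 gives the reported 74.050786ms. A sketch of that check (the tolerance value is an assumption; the log only says the delta was "within tolerance"):

	package sketch

	import (
		"fmt"
		"time"
	)

	// clockDeltaOK mirrors the decision above: delta = |guest - host|,
	// accepted when it is under the tolerance.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v\n", delta) // e.g. 74.050786ms
		return delta <= tolerance
	}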
	I0816 22:37:42.864711   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.864961   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetIP
	I0816 22:37:42.871077   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.871460   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.871504   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.871781   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.871990   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.872747   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.873035   19204 ssh_runner.go:149] Run: systemctl --version
	I0816 22:37:42.873067   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.873387   19204 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:37:42.873431   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.881178   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.881737   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.882041   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.882067   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.882095   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.882114   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.882432   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.882476   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.882624   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.882654   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.882754   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.882821   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.882852   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:37:42.882932   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:37:42.983824   19204 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:37:42.983945   19204 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:37:41.792417   18923 pod_ready.go:102] pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:42.110388   18923 pod_ready.go:92] pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:42.110425   18923 pod_ready.go:81] duration metric: took 8.051231395s waiting for pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.110443   18923 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.128769   18923 pod_ready.go:92] pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:42.128789   18923 pod_ready.go:81] duration metric: took 18.337432ms waiting for pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.128804   18923 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.137520   18923 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:42.137541   18923 pod_ready.go:81] duration metric: took 8.728281ms waiting for pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.137554   18923 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-64m6s" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.158798   18923 pod_ready.go:92] pod "kube-proxy-64m6s" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:42.158877   18923 pod_ready.go:81] duration metric: took 21.313805ms waiting for pod "kube-proxy-64m6s" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.158908   18923 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:44.512973   18923 pod_ready.go:102] pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:45.697026   18923 pod_ready.go:92] pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:45.697054   18923 pod_ready.go:81] duration metric: took 3.538123235s waiting for pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:45.697067   18923 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:44.369712   18929 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (11.708626678s)
	I0816 22:37:44.369752   18929 containerd.go:553] Took 11.708733 seconds to extract the tarball
	I0816 22:37:44.369766   18929 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0816 22:37:44.433232   18929 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:37:44.586357   18929 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:37:44.635654   18929 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0816 22:37:44.682553   18929 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0816 22:37:44.697822   18929 docker.go:153] disabling docker service ...
	I0816 22:37:44.697882   18929 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:37:44.709238   18929 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:37:44.720469   18929 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:37:44.857666   18929 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:37:44.991672   18929 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:37:45.005773   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:37:45.020903   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
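Decoded, the base64 payload above is containerd's config.toml. An excerpt of the decoded settings most relevant to this run (elided sections marked with ...):

	root = "/var/lib/containerd"
	state = "/run/containerd"
	oom_score = 0
	[grpc]
	  address = "/run/containerd/containerd.sock"
	...
	    sandbox_image = "k8s.gcr.io/pause:3.4.1"
	...
	              SystemdCgroup = false
	...
	      snapshotter = "overlayfs"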
	I0816 22:37:45.035818   18929 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:37:45.045388   18929 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:37:45.045444   18929 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:37:45.065836   18929 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 22:37:45.073649   18929 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:37:45.210250   18929 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:37:45.536389   18929 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0816 22:37:45.536468   18929 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:37:45.543940   18929 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0816 22:37:46.648822   18929 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
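The socket wait above is another instance of the same retry pattern: stat /run/containerd/containerd.sock after restarting containerd, retry after roughly 1.1s, give up after the 60s budget. A sketch with the budget values taken from the log lines:

	package sketch

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket stats the containerd socket until it exists, mirroring
	// the "Will wait 60s for socket path" / "will retry after 1.104660288s"
	// lines above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(1100 * time.Millisecond)
		}
		return fmt.Errorf("%s did not appear within %v", path, timeout)
	}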
	I0816 22:37:46.654589   18929 start.go:413] Will wait 60s for crictl version
	I0816 22:37:46.654654   18929 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:37:46.687975   18929 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0816 22:37:46.688041   18929 ssh_runner.go:149] Run: containerd --version
	I0816 22:37:46.717960   18929 ssh_runner.go:149] Run: containerd --version
	I0816 22:37:43.671220   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:45.887022   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:47.896514   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:46.994449   19204 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.010481954s)
	I0816 22:37:46.994588   19204 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0816 22:37:46.994677   19204 ssh_runner.go:149] Run: which lz4
	I0816 22:37:46.999431   19204 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0816 22:37:47.004309   19204 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0816 22:37:47.004338   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (928970367 bytes)
	I0816 22:37:47.723452   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:49.727582   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:46.750218   18929 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0816 22:37:46.750266   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetIP
	I0816 22:37:46.755631   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:46.756018   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:46.756051   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:46.756195   18929 ssh_runner.go:149] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0816 22:37:46.760434   18929 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:37:46.770865   18929 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:37:46.770913   18929 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:37:46.804122   18929 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:37:46.804147   18929 containerd.go:517] Images already preloaded, skipping extraction
	I0816 22:37:46.804200   18929 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:37:46.836132   18929 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:37:46.836154   18929 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:37:46.836213   18929 ssh_runner.go:149] Run: sudo crictl info
	I0816 22:37:46.870224   18929 cni.go:93] Creating CNI manager for ""
	I0816 22:37:46.870256   18929 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:37:46.870269   18929 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:37:46.870282   18929 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.129 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20210816223333-6986 NodeName:embed-certs-20210816223333-6986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.105.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:37:46.870401   18929 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20210816223333-6986"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 22:37:46.870482   18929 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20210816223333-6986 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.105.129 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816223333-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0816 22:37:46.870540   18929 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0816 22:37:46.878703   18929 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:37:46.878775   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:37:46.887763   18929 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (548 bytes)
	I0816 22:37:46.900548   18929 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 22:37:46.911899   18929 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2090 bytes)
	I0816 22:37:46.925412   18929 ssh_runner.go:149] Run: grep 192.168.105.129	control-plane.minikube.internal$ /etc/hosts
	I0816 22:37:46.929442   18929 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:37:46.939989   18929 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986 for IP: 192.168.105.129
	I0816 22:37:46.940054   18929 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:37:46.940073   18929 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:37:46.940143   18929 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/client.key
	I0816 22:37:46.940182   18929 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/apiserver.key.ff3abd74
	I0816 22:37:46.940203   18929 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/proxy-client.key
	I0816 22:37:46.940311   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem (1338 bytes)
	W0816 22:37:46.940364   18929 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
	I0816 22:37:46.940374   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 22:37:46.940398   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:37:46.940419   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:37:46.940453   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1675 bytes)
	I0816 22:37:46.940501   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:37:46.941607   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:37:46.959921   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 22:37:46.977073   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:37:46.995032   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 22:37:47.016388   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:37:47.036886   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 22:37:47.056736   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:37:47.076945   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 22:37:47.096512   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
	I0816 22:37:47.117888   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
	I0816 22:37:47.137952   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:37:47.159313   18929 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:37:47.173334   18929 ssh_runner.go:149] Run: openssl version
	I0816 22:37:47.179650   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:37:47.191486   18929 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:37:47.196524   18929 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:42 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:37:47.196589   18929 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:37:47.204162   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
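OpenSSL looks up trusted CAs in /etc/ssl/certs via subject-hash symlinks: b5213941 is the hash printed by the openssl x509 -hash call above, and the .0 suffix disambiguates hash collisions. The link can be reproduced by hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
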
	I0816 22:37:47.214626   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
	I0816 22:37:47.226391   18929 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6986.pem
	I0816 22:37:47.234494   18929 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:49 /usr/share/ca-certificates/6986.pem
	I0816 22:37:47.234558   18929 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
	I0816 22:37:47.242705   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
	I0816 22:37:47.253305   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
	I0816 22:37:47.263502   18929 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/69862.pem
	I0816 22:37:47.268803   18929 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:49 /usr/share/ca-certificates/69862.pem
	I0816 22:37:47.268865   18929 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
	I0816 22:37:47.274964   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:37:47.283354   18929 kubeadm.go:390] StartCluster: {Name:embed-certs-20210816223333-6986 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3
ClusterName:embed-certs-20210816223333-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.129 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] Start
HostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:37:47.283503   18929 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0816 22:37:47.283565   18929 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:37:47.325446   18929 cri.go:76] found id: ""
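An empty id list here means no kube-system containers are currently known to containerd; the query filters on the CRI pod-namespace label. Listing and then stopping such containers would follow the same pattern (the stop step is illustrative, not taken from this log):

    ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
    [ -n "$ids" ] && echo "$ids" | xargs -r sudo crictl stop
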
	I0816 22:37:47.325557   18929 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:37:47.335659   18929 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:37:47.335682   18929 kubeadm.go:600] restartCluster start
	I0816 22:37:47.335733   18929 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:37:47.346292   18929 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:47.347565   18929 kubeconfig.go:117] verify returned: extract IP: "embed-certs-20210816223333-6986" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:37:47.348014   18929 kubeconfig.go:128] "embed-certs-20210816223333-6986" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:37:47.348788   18929 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
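Once the repair is written, the profile's context should exist in the kubeconfig; a quick check, assuming kubectl is available and KUBECONFIG points at the file above:

    kubectl config get-contexts embed-certs-20210816223333-6986
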
	I0816 22:37:47.351634   18929 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:37:47.361663   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:47.361718   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:47.374579   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:47.574973   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:47.575059   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:47.589172   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:47.775434   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:47.775507   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:47.788957   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:47.975270   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:47.975360   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:47.989460   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.175680   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.175758   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.191429   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.375697   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.375790   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.386436   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.574665   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.574762   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.589082   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.775443   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.775512   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.791358   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.975634   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.975720   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.988259   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.175437   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.175544   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.190342   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.375596   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.375683   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.389601   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.574808   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.574892   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.585369   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.775000   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.775066   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.787982   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.975134   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.975231   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.986392   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:50.175658   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:50.175750   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:50.188143   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:50.375418   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:50.375514   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:50.387182   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:50.387201   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:50.387249   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:50.397435   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:50.397461   18929 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
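The checks above poll roughly every 200ms for a running apiserver process before declaring a reconfigure necessary. A minimal equivalent of that loop, with the same pgrep pattern and an illustrative retry budget:

    for _ in $(seq 1 15); do
      if pid=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*'); then
        echo "apiserver pid: ${pid}"; break
      fi
      sleep 0.2
    done
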
	I0816 22:37:50.397471   18929 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:37:50.397485   18929 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:37:50.397549   18929 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:37:50.439348   18929 cri.go:76] found id: ""
	I0816 22:37:50.439419   18929 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:37:50.459652   18929 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:37:50.469766   18929 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:37:50.469836   18929 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:37:50.479399   18929 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:37:50.479422   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:50.872420   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:50.387080   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:52.388399   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:53.358602   19204 containerd.go:546] Took 6.359210 seconds to copy over tarball
	I0816 22:37:53.358725   19204 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 22:37:51.735229   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:54.223000   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:52.412541   18929 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.540081052s)
	I0816 22:37:52.412575   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:52.718154   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:52.886875   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
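Rather than a full kubeadm init, the restart replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same staged config. Each invocation has the shape shown in the log, e.g.:

    sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH \
      kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
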
	I0816 22:37:53.025017   18929 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:37:53.025085   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:53.540988   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:54.040437   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:54.541392   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:55.040418   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:55.540381   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:54.887899   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:56.229434   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:58.302035   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:00.733041   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:56.040801   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:56.540669   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:57.040354   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:57.540386   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:58.040333   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:58.540400   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:59.040772   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:59.540444   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:00.041274   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:00.540645   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:00.741760   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:02.887487   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:03.393238   19204 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (10.034485098s)
	I0816 22:38:03.393270   19204 containerd.go:553] Took 10.034612 seconds to extract the tarball
	I0816 22:38:03.393282   19204 ssh_runner.go:100] rm: /preloaded.tar.lz4
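The preloaded images ship as an lz4-compressed tarball extracted directly into /var, which is where containerd keeps its image store, after which the archive is removed:

    sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4    # requires lz4 in the guest
    sudo rm -f /preloaded.tar.lz4
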
	I0816 22:38:03.459021   19204 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:38:03.599477   19204 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:38:03.656046   19204 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0816 22:38:03.843112   19204 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0816 22:38:03.858574   19204 docker.go:153] disabling docker service ...
	I0816 22:38:03.858632   19204 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:38:03.872784   19204 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:38:03.886816   19204 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:38:04.029472   19204 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:38:04.164998   19204 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:38:04.176395   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:38:04.190579   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kI
gogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
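Shipping the containerd config base64-encoded through printf | base64 -d | sudo tee sidesteps shell quoting of a multi-line TOML document. The same pattern with a tiny payload (hypothetical target file and content):

    printf %s 'b29tX3Njb3JlID0gMAo=' | base64 -d | sudo tee /etc/containerd/demo.toml
    # writes a single line: oom_score = 0
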
	I0816 22:38:04.204338   19204 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:38:04.211355   19204 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:38:04.211415   19204 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:38:04.229181   19204 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
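The sysctl above fails with status 255 because the br_netfilter module is not loaded, so /proc/sys/net/bridge/bridge-nf-call-iptables does not exist yet; loading the module and re-enabling IPv4 forwarding covers both cases:

    sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1 \
      || sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
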
	I0816 22:38:04.236487   19204 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:38:04.368079   19204 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:38:03.226580   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:05.846484   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:01.040586   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:01.541229   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:02.041014   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:02.540773   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:03.040804   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:03.540654   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:04.041158   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:04.540403   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:05.041212   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:05.540477   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:05.871071   19204 ssh_runner.go:189] Completed: sudo systemctl restart containerd: (1.502953255s)
	I0816 22:38:05.871107   19204 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0816 22:38:05.871162   19204 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:38:05.876672   19204 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0816 22:38:06.981936   19204 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:38:06.987477   19204 start.go:413] Will wait 60s for crictl version
	I0816 22:38:06.987542   19204 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:38:07.019404   19204 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
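After restarting containerd, the socket can lag behind the service, hence the stat retry above before crictl is queried. A compact equivalent, with the timeout handling omitted for brevity:

    until stat /run/containerd/containerd.sock >/dev/null 2>&1; do sleep 1; done
    sudo crictl version
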
	I0816 22:38:07.019460   19204 ssh_runner.go:149] Run: containerd --version
	I0816 22:38:07.056241   19204 ssh_runner.go:149] Run: containerd --version
	I0816 22:38:05.841456   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:07.888564   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:07.088137   19204 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0816 22:38:07.088183   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetIP
	I0816 22:38:07.093462   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:38:07.093796   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:38:07.093832   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:38:07.093973   19204 ssh_runner.go:149] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 22:38:07.098921   19204 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:38:07.109221   19204 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:38:07.109293   19204 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:38:07.143575   19204 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:38:07.143601   19204 containerd.go:517] Images already preloaded, skipping extraction
	I0816 22:38:07.143659   19204 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:38:07.174105   19204 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:38:07.174129   19204 cache_images.go:74] Images are preloaded, skipping loading
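Two consecutive crictl images passes both reporting a full preload are what let the cache-loading step be skipped. A quick count of what the runtime already holds:

    sudo crictl images -q | wc -l    # one image ID per line
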
	I0816 22:38:07.174182   19204 ssh_runner.go:149] Run: sudo crictl info
	I0816 22:38:07.212980   19204 cni.go:93] Creating CNI manager for ""
	I0816 22:38:07.213012   19204 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:38:07.213028   19204 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:38:07.213043   19204 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.186 APIServerPort:8444 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20210816223418-6986 NodeName:default-k8s-different-port-20210816223418-6986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.
50.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:38:07.213191   19204 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.186
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20210816223418-6986"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 22:38:07.213279   19204 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20210816223418-6986 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.186 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210816223418-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0816 22:38:07.213332   19204 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0816 22:38:07.222054   19204 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:38:07.222139   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:38:07.230063   19204 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (562 bytes)
	I0816 22:38:07.244461   19204 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 22:38:07.259892   19204 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0816 22:38:07.274883   19204 ssh_runner.go:149] Run: grep 192.168.50.186	control-plane.minikube.internal$ /etc/hosts
	I0816 22:38:07.280261   19204 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:38:07.293265   19204 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986 for IP: 192.168.50.186
	I0816 22:38:07.293314   19204 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:38:07.293333   19204 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:38:07.293384   19204 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/client.key
	I0816 22:38:07.293423   19204 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/apiserver.key.c5cc0a12
	I0816 22:38:07.293458   19204 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/proxy-client.key
	I0816 22:38:07.293569   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem (1338 bytes)
	W0816 22:38:07.293608   19204 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
	I0816 22:38:07.293618   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 22:38:07.293643   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:38:07.293668   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:38:07.293692   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1675 bytes)
	I0816 22:38:07.293738   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:38:07.294686   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:38:07.314730   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 22:38:07.332358   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:38:07.351920   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 22:38:07.369849   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:38:07.388099   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 22:38:07.406297   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:38:07.425998   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 22:38:07.443687   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:38:07.460832   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
	I0816 22:38:07.481210   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
	I0816 22:38:07.501717   19204 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:38:07.514903   19204 ssh_runner.go:149] Run: openssl version
	I0816 22:38:07.520949   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:38:07.531264   19204 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:38:07.536846   19204 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:42 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:38:07.536898   19204 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:38:07.543551   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:38:07.553322   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
	I0816 22:38:07.563414   19204 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6986.pem
	I0816 22:38:07.568579   19204 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:49 /usr/share/ca-certificates/6986.pem
	I0816 22:38:07.568631   19204 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
	I0816 22:38:07.574828   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
	I0816 22:38:07.582849   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
	I0816 22:38:07.591254   19204 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/69862.pem
	I0816 22:38:07.595981   19204 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:49 /usr/share/ca-certificates/69862.pem
	I0816 22:38:07.596044   19204 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
	I0816 22:38:07.602206   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:38:07.611191   19204 kubeadm.go:390] StartCluster: {Name:default-k8s-different-port-20210816223418-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.21.3 ClusterName:default-k8s-different-port-20210816223418-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_read
y:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:38:07.611272   19204 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0816 22:38:07.611319   19204 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:38:07.643146   19204 cri.go:76] found id: ""
	I0816 22:38:07.643226   19204 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:38:07.650886   19204 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:38:07.650919   19204 kubeadm.go:600] restartCluster start
	I0816 22:38:07.650971   19204 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:38:07.658653   19204 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:07.659605   19204 kubeconfig.go:117] verify returned: extract IP: "default-k8s-different-port-20210816223418-6986" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:38:07.660046   19204 kubeconfig.go:128] "default-k8s-different-port-20210816223418-6986" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:38:07.661820   19204 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:38:07.664797   19204 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:38:07.672378   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:07.672416   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:07.682197   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:07.882615   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:07.882689   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:07.893628   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.082995   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.083063   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.092764   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.283037   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.283112   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.293325   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.482586   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.482681   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.493502   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.682844   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.682915   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.693201   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.882416   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.882491   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.892118   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.082359   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.082457   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.092165   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.282385   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.282459   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.291528   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.482860   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.482930   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.493037   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.682335   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.682408   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.691945   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.883133   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.883193   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.892794   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.083140   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.083233   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.092308   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.223670   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:10.742112   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:06.041308   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:06.540690   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:07.041155   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:07.540839   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:08.040793   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:08.541292   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:09.041388   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:09.540943   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:10.041377   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:10.541237   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:10.386476   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:12.889815   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:10.282796   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.282889   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.292190   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.482261   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.482330   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.491729   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.683104   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.683186   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.693060   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.693079   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.693121   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.701893   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.701916   19204 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0816 22:38:10.701925   19204 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:38:10.701938   19204 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:38:10.701989   19204 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:38:10.740433   19204 cri.go:76] found id: ""
	I0816 22:38:10.740501   19204 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:38:10.756485   19204 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:38:10.765450   19204 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:38:10.765507   19204 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:38:10.772477   19204 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:38:10.772499   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:11.017384   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:12.671111   19204 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.653686174s)
	I0816 22:38:12.671155   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:12.947393   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:13.086256   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:13.215447   19204 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:38:13.215508   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:13.731105   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.231119   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.731093   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:15.231319   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:13.224797   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:15.723341   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:11.040800   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:11.540697   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:12.040673   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:12.541181   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:13.041152   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:13.541025   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.041183   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.541230   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.551768   18929 api_server.go:70] duration metric: took 21.526753133s to wait for apiserver process to appear ...
	I0816 22:38:14.551790   18929 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:38:14.551800   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:38:15.386344   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:16.395588   18635 pod_ready.go:92] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.395621   18635 pod_ready.go:81] duration metric: took 51.044447203s waiting for pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.395634   18635 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.408068   18635 pod_ready.go:92] pod "etcd-old-k8s-version-20210816223154-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.408086   18635 pod_ready.go:81] duration metric: took 12.443476ms waiting for pod "etcd-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.408096   18635 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.414488   18635 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20210816223154-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.414507   18635 pod_ready.go:81] duration metric: took 6.402316ms waiting for pod "kube-apiserver-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.414521   18635 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.420281   18635 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20210816223154-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.420300   18635 pod_ready.go:81] duration metric: took 5.769412ms waiting for pod "kube-controller-manager-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.420313   18635 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nvb2s" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.425411   18635 pod_ready.go:92] pod "kube-proxy-nvb2s" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.425430   18635 pod_ready.go:81] duration metric: took 5.109715ms waiting for pod "kube-proxy-nvb2s" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.425440   18635 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.784339   18635 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20210816223154-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.784360   18635 pod_ready.go:81] duration metric: took 358.911908ms waiting for pod "kube-scheduler-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.784371   18635 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:18.553150   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:38:18.553194   18929 api_server.go:101] status: https://192.168.105.129:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:38:19.053887   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:38:19.071151   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:38:19.071179   18929 api_server.go:101] status: https://192.168.105.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:38:19.553619   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:38:19.561382   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:38:19.561406   18929 api_server.go:101] status: https://192.168.105.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:38:20.053341   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:38:20.061527   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 200:
	ok
	I0816 22:38:20.069537   18929 api_server.go:139] control plane version: v1.21.3
	I0816 22:38:20.069560   18929 api_server.go:129] duration metric: took 5.517764917s to wait for apiserver health ...
	I0816 22:38:20.069572   18929 cni.go:93] Creating CNI manager for ""
	I0816 22:38:20.069581   18929 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:38:15.731207   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:16.231247   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:16.731268   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:17.230730   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:17.730956   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:18.231458   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:18.730950   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:19.230879   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:19.730819   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:20.230563   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:20.243853   19204 api_server.go:70] duration metric: took 7.028407985s to wait for apiserver process to appear ...
	I0816 22:38:20.243876   19204 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:38:20.243887   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:18.225200   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:20.243220   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:20.071659   18929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:38:20.071738   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:38:20.084719   18929 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:38:20.113939   18929 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:38:20.132494   18929 system_pods.go:59] 8 kube-system pods found
	I0816 22:38:20.132598   18929 system_pods.go:61] "coredns-558bd4d5db-jq6bb" [c088e8ae-638c-449f-b206-10b016f707f4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:38:20.132622   18929 system_pods.go:61] "etcd-embed-certs-20210816223333-6986" [350ff095-f45d-4c87-a10a-cbb9a0cc4358] Running
	I0816 22:38:20.132654   18929 system_pods.go:61] "kube-apiserver-embed-certs-20210816223333-6986" [7ee444e9-f198-4d9b-985e-b190a2e5e369] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 22:38:20.132667   18929 system_pods.go:61] "kube-controller-manager-embed-certs-20210816223333-6986" [c71ecc69-d617-48d3-a162-46d27aedd0a9] Running
	I0816 22:38:20.132676   18929 system_pods.go:61] "kube-proxy-8h6xz" [7cbdd516-13c5-469b-8e60-7dc0babb699a] Running
	I0816 22:38:20.132688   18929 system_pods.go:61] "kube-scheduler-embed-certs-20210816223333-6986" [4ebf165e-13c3-4f42-a75f-4301ea2f6c78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:38:20.132698   18929 system_pods.go:61] "metrics-server-7c784ccb57-9xpsr" [6b6283cf-0668-48a4-9f21-61cb5723f0b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:38:20.132704   18929 system_pods.go:61] "storage-provisioner" [7893460e-43c2-4606-8b56-c2ed9ac764bd] Running
	I0816 22:38:20.132712   18929 system_pods.go:74] duration metric: took 18.749758ms to wait for pod list to return data ...
	I0816 22:38:20.132721   18929 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:38:20.138564   18929 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:38:20.138614   18929 node_conditions.go:123] node cpu capacity is 2
	I0816 22:38:20.138632   18929 node_conditions.go:105] duration metric: took 5.904026ms to run NodePressure ...
	I0816 22:38:20.138651   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:20.830223   18929 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:38:20.835364   18929 kubeadm.go:746] kubelet initialised
	I0816 22:38:20.835384   18929 kubeadm.go:747] duration metric: took 5.139864ms waiting for restarted kubelet to initialise ...
	I0816 22:38:20.835392   18929 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:38:20.841354   18929 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:19.191797   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:21.192936   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:25.244953   19204 api_server.go:255] stopped: https://192.168.50.186:8444/healthz: Get "https://192.168.50.186:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 22:38:22.723414   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:25.223163   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:22.860677   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:24.863916   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:23.690499   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:25.690995   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:27.691820   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:25.746028   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:27.721976   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:29.722107   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:27.361030   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:29.361218   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:30.190894   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:32.192100   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:30.746969   19204 api_server.go:255] stopped: https://192.168.50.186:8444/healthz: Get "https://192.168.50.186:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 22:38:31.245148   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:32.224115   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:34.723153   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:31.859919   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:33.863770   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:34.691552   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:36.693980   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:36.246218   19204 api_server.go:255] stopped: https://192.168.50.186:8444/healthz: Get "https://192.168.50.186:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 22:38:36.745853   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:37.223369   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:39.239225   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:36.360668   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:38.361218   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:40.871372   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:41.344967   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:38:41.344991   19204 api_server.go:101] status: https://192.168.50.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:38:41.745061   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:41.754168   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:38:41.754195   19204 api_server.go:101] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:38:42.245898   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:42.258458   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:38:42.258509   19204 api_server.go:101] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:38:42.745610   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:42.756658   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 200:
	ok
	I0816 22:38:42.770293   19204 api_server.go:139] control plane version: v1.21.3
	I0816 22:38:42.770321   19204 api_server.go:129] duration metric: took 22.526438535s to wait for apiserver health ...
	I0816 22:38:42.770332   19204 cni.go:93] Creating CNI manager for ""
	I0816 22:38:42.770339   19204 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:38:39.192176   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:41.198006   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:42.772377   19204 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:38:42.772434   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:38:42.788298   19204 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:38:42.809709   19204 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:38:42.824805   19204 system_pods.go:59] 8 kube-system pods found
	I0816 22:38:42.824843   19204 system_pods.go:61] "coredns-558bd4d5db-ssfkf" [eb30728b-0eae-41d8-90bc-d8de8c6b4caa] Running
	I0816 22:38:42.824857   19204 system_pods.go:61] "etcd-default-k8s-different-port-20210816223418-6986" [825a27d4-d8dc-4dbe-a724-ac2e59508c5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 22:38:42.824865   19204 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210816223418-6986" [a3383733-5a20-4b5a-aeab-df3e61e37d94] Running
	I0816 22:38:42.824882   19204 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210816223418-6986" [42f433b1-271b-41a6-96a0-ab85fe6ba28e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 22:38:42.824896   19204 system_pods.go:61] "kube-proxy-psg4t" [98ca6629-d521-445d-99c2-b7e7ddf3b973] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:38:42.824905   19204 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210816223418-6986" [bef50322-5dc7-4680-b867-e17eb23298a8] Running
	I0816 22:38:42.824919   19204 system_pods.go:61] "metrics-server-7c784ccb57-rmrr6" [325f4892-3ae2-4a08-bc13-22c74c15c362] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:38:42.824929   19204 system_pods.go:61] "storage-provisioner" [89aadc6c-b5b0-47eb-b6e0-0f5fb78b1689] Running
	I0816 22:38:42.824936   19204 system_pods.go:74] duration metric: took 15.209253ms to wait for pod list to return data ...
	I0816 22:38:42.824947   19204 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:38:42.835095   19204 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:38:42.835144   19204 node_conditions.go:123] node cpu capacity is 2
	I0816 22:38:42.835160   19204 node_conditions.go:105] duration metric: took 10.206913ms to run NodePressure ...
	I0816 22:38:42.835178   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:43.431532   19204 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:38:43.443469   19204 kubeadm.go:746] kubelet initialised
	I0816 22:38:43.443543   19204 kubeadm.go:747] duration metric: took 11.973692ms waiting for restarted kubelet to initialise ...
	I0816 22:38:43.443567   19204 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:38:43.467119   19204 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-ssfkf" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:43.487197   19204 pod_ready.go:92] pod "coredns-558bd4d5db-ssfkf" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:43.487224   19204 pod_ready.go:81] duration metric: took 20.062907ms waiting for pod "coredns-558bd4d5db-ssfkf" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:43.487236   19204 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:41.723036   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:43.727234   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:42.883394   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:45.360217   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:43.692394   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:46.195001   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:45.513670   19204 pod_ready.go:102] pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:47.520170   19204 pod_ready.go:102] pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:50.012608   19204 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:50.012643   19204 pod_ready.go:81] duration metric: took 6.525398312s waiting for pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:50.012653   19204 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:50.018616   19204 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:50.018632   19204 pod_ready.go:81] duration metric: took 5.971078ms waiting for pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:50.018641   19204 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:46.223793   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:48.231527   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:50.721902   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:47.864929   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:50.359955   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:48.690708   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:50.691511   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:53.191133   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:52.030327   19204 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:54.530276   19204 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:52.723113   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:54.730785   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:52.865142   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:55.362902   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:55.692797   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:58.193231   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:56.537583   19204 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:59.032998   19204 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:59.531144   19204 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:59.531179   19204 pod_ready.go:81] duration metric: took 9.512530001s waiting for pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:59.531194   19204 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-psg4t" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:57.227423   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:59.722421   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:57.860847   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:00.383065   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:00.194401   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:02.693032   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:01.045104   19204 pod_ready.go:92] pod "kube-proxy-psg4t" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.045136   19204 pod_ready.go:81] duration metric: took 1.513934389s waiting for pod "kube-proxy-psg4t" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.045162   19204 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:03.065559   19204 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:02.225371   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:04.231432   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:01.360648   18929 pod_ready.go:92] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.360679   18929 pod_ready.go:81] duration metric: took 40.519291305s waiting for pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.360692   18929 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.377816   18929 pod_ready.go:92] pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.377835   18929 pod_ready.go:81] duration metric: took 17.135128ms waiting for pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.377844   18929 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.384900   18929 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.384919   18929 pod_ready.go:81] duration metric: took 7.067915ms waiting for pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.384928   18929 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.391593   18929 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.391615   18929 pod_ready.go:81] duration metric: took 6.679953ms waiting for pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.391628   18929 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8h6xz" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.397839   18929 pod_ready.go:92] pod "kube-proxy-8h6xz" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.397859   18929 pod_ready.go:81] duration metric: took 6.224125ms waiting for pod "kube-proxy-8h6xz" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.397870   18929 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.757203   18929 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.757231   18929 pod_ready.go:81] duration metric: took 359.352415ms waiting for pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.757245   18929 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:04.166965   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:05.190883   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:07.691413   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:05.560049   19204 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:07.563106   19204 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:10.058732   19204 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:06.241105   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:08.721067   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:10.729982   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:06.173818   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:08.671197   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:10.190249   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:12.190937   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:11.058551   19204 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:11.058589   19204 pod_ready.go:81] duration metric: took 10.013415785s waiting for pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:11.058602   19204 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:13.079741   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:13.222923   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:15.223480   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:11.169568   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:13.668888   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:15.675907   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:14.691328   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:17.193097   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:15.574185   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:18.080714   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:17.721688   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:20.223136   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:18.166872   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:20.167888   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:19.690743   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:21.695097   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:20.573176   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:22.575373   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:25.080599   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:22.721982   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:24.723334   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:22.674385   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:25.168465   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:24.191127   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:26.692188   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:27.576508   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:30.077538   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:26.725975   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:29.222550   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:27.667108   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:29.672819   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:29.190076   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:31.191096   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:32.573255   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:34.574846   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:31.222778   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:33.721695   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:35.722989   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:32.167222   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:34.168925   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:33.691602   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:35.693194   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:38.192247   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:36.575818   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:39.074280   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:37.724177   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:40.222061   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:36.667227   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:38.667709   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:40.193105   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:42.691214   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:41.577819   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:44.074371   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:42.222318   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:44.223676   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:41.169382   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:43.169678   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:45.172140   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:44.692521   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:46.693152   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:46.080520   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:48.574175   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:46.226822   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:48.723407   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:50.723464   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:47.669324   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:50.168305   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:49.191566   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:51.192223   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:50.574493   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:53.072736   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:55.075288   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:53.226025   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:55.722244   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:52.667088   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:54.668826   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:53.690899   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:55.692317   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:58.190689   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:57.076942   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:59.573822   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:58.225641   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:00.721925   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:57.165321   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:59.171812   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:00.194014   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:02.691574   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:01.573901   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:04.073928   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:02.724585   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:04.724644   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:01.175154   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:03.669857   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:05.191832   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:07.693327   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:06.576903   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:09.078443   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:07.222275   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:09.224637   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:06.167190   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:08.168551   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:10.668660   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:10.191769   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:12.693193   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:11.574665   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:13.576508   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:11.224838   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:13.721159   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:15.727256   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:12.670244   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:15.167885   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:15.194325   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:17.692108   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:16.072818   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:18.078890   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:17.729812   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:20.226491   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:17.177047   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:19.217251   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:20.192280   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:22.693518   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:20.574552   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:22.574777   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:24.577476   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:22.727579   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:24.728352   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:21.668537   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:24.167106   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:25.191135   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:27.191723   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:27.075236   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:29.574554   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:27.223601   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:29.225348   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:26.172206   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:28.666902   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:30.667512   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:29.693817   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:32.192170   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:32.073947   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:34.076857   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:31.806875   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:34.222064   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:32.670097   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:35.167425   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:34.193574   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:36.692421   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:36.575233   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:39.074418   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:36.223456   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:38.224575   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:40.721673   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:37.168398   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:39.172793   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:39.196016   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:41.690324   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:41.075116   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:43.576123   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:42.724088   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:44.724675   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:41.674073   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:44.170704   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:43.693077   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:45.693362   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:48.190525   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:45.576264   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:48.077395   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:46.729980   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:49.221967   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:46.171454   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:48.665714   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:50.668334   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:50.193564   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:52.691234   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:50.572686   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:52.574382   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:55.074999   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:51.222668   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:53.226343   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:55.725259   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:52.673171   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:55.168585   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:54.692513   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:57.191126   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:57.079875   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:59.573017   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:58.221527   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:00.227502   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:57.671255   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:00.168665   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:59.691534   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:01.693478   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:01.582883   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:04.072426   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:02.722966   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:04.727296   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:02.173240   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:04.665480   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:04.191798   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:06.691447   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:06.073825   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:08.074664   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:10.075325   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:07.223517   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:09.721892   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:06.667330   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:08.671220   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:09.191192   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:11.691389   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:12.076107   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:14.575585   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:11.725914   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:13.730699   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:11.169385   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:13.673312   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:14.191060   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:16.192184   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:16.576492   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:19.076650   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:16.225569   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:18.724188   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:20.724698   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:16.165664   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:18.166105   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:20.166339   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:18.691871   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:20.691922   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:23.191074   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:21.574173   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:24.075930   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:23.223119   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:25.223978   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:22.173729   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:24.666435   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:25.692064   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:27.693165   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:26.574028   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:28.577627   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:27.723162   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:30.225428   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:26.666698   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:28.667290   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:30.669320   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:30.191236   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:32.194129   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:31.078550   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:33.574708   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:32.272795   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:34.721477   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:32.670349   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:35.166861   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:34.691270   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:36.693071   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:36.073462   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:38.075367   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:36.731674   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:39.226976   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:37.170645   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:39.724821   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:39.190190   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:41.192605   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:43.194313   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:40.572815   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:43.074323   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:41.728026   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:44.222098   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:45.713684   18923 pod_ready.go:81] duration metric: took 4m0.016600156s waiting for pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace to be "Ready" ...
	E0816 22:41:45.713707   18923 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:41:45.713739   18923 pod_ready.go:38] duration metric: took 4m11.701504099s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:41:45.713769   18923 kubeadm.go:604] restartCluster took 4m33.579475629s
	W0816 22:41:45.713944   18923 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:41:45.714027   18923 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
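The pod_ready.go:102 lines above are a readiness poll: each start job re-checks its metrics-server pod every couple of seconds until the pod's Ready condition turns True, and gives up after the 4m0s budget named in the timeout message, at which point minikube falls back to `kubeadm reset`. A minimal client-go sketch of that check (not minikube's actual pod_ready.go; the pod name is taken from the log and the 2s interval is an assumption):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True -- the same
    // predicate the "Ready":"False" log lines above are printing.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Re-check until Ready or the 4m0s deadline expires, as in the log.
        err = wait.PollImmediate(2*time.Second, 4*time.Minute, func() (bool, error) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "metrics-server-7c784ccb57-44llk", metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API errors: keep polling
            }
            return podReady(pod), nil
        })
        fmt.Println("wait result:", err) // wait.ErrWaitTimeout on the 4m timeout
    }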
	I0816 22:41:42.167746   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:44.671010   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:45.690207   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:47.696181   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:45.573577   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:47.577169   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:50.074120   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:49.532312   18923 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.817885262s)
	I0816 22:41:49.532396   18923 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:41:49.547377   18923 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:41:49.547460   18923 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:41:49.586205   18923 cri.go:76] found id: "c265ff52803b42d9b0ff6036d0ca9fe4b655c9875474baa9a0a3f7255d230e17"
	I0816 22:41:49.586231   18923 cri.go:76] found id: ""
	W0816 22:41:49.586237   18923 kubeadm.go:840] found 1 kube-system containers to stop
	I0816 22:41:49.586243   18923 cri.go:221] Stopping containers: [c265ff52803b42d9b0ff6036d0ca9fe4b655c9875474baa9a0a3f7255d230e17]
	I0816 22:41:49.586286   18923 ssh_runner.go:149] Run: which crictl
	I0816 22:41:49.590992   18923 ssh_runner.go:149] Run: sudo /bin/crictl stop c265ff52803b42d9b0ff6036d0ca9fe4b655c9875474baa9a0a3f7255d230e17
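The crictl sequence above is how the reset path clears leftover workloads: list every kube-system container by CRI label, then stop each returned ID. A local sketch of the same two commands (the log runs them over SSH via ssh_runner; this assumes crictl on PATH and sudo rights):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // "crictl ps -a --quiet --label ..." prints one container ID per line.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        if err != nil {
            panic(err)
        }
        for _, id := range strings.Fields(string(out)) {
            fmt.Println("stopping", id)
            if err := exec.Command("sudo", "crictl", "stop", id).Run(); err != nil {
                fmt.Println("stop failed:", err)
            }
        }
    }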
	I0816 22:41:49.626874   18923 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:41:49.635033   18923 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:41:49.643072   18923 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:41:49.643114   18923 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
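The kubeadm.go:151 message above is the stale-config check doing its job: since `kubeadm reset` already deleted all four kubeconfigs under /etc/kubernetes, the `ls -la` exits with status 2, cleanup is skipped, and minikube goes straight to `kubeadm init`. The equivalent check, sketched in Go (an illustration, not minikube's code):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            if _, err := os.Stat(f); err != nil {
                // Any missing file means there is no stale config to clean up.
                fmt.Println("config check failed, skipping stale config cleanup:", err)
                return
            }
        }
        fmt.Println("all kubeconfigs present; stale config cleanup would run")
    }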
	I0816 22:41:46.671498   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:49.167852   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:50.191302   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:52.194912   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:52.573508   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:54.574289   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:51.170118   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:53.672114   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:54.691353   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:56.691660   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:57.075408   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:59.575201   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:56.166934   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:58.175241   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:00.668070   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:58.692572   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:00.693110   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:02.693563   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:02.073370   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:04.074072   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:03.171450   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:05.675018   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:05.192214   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:07.692700   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:08.829041   18923 out.go:204]   - Generating certificates and keys ...
	I0816 22:42:08.831708   18923 out.go:204]   - Booting up control plane ...
	I0816 22:42:08.834200   18923 out.go:204]   - Configuring RBAC rules ...
	I0816 22:42:08.836416   18923 cni.go:93] Creating CNI manager for ""
	I0816 22:42:08.836433   18923 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:42:06.578343   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:08.578554   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:08.838017   18923 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:42:08.838073   18923 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:42:08.846501   18923 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:42:08.869457   18923 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:42:08.869501   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:08.869527   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=no-preload-20210816223156-6986 minikube.k8s.io/updated_at=2021_08_16T22_42_08_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:09.240543   18923 ops.go:34] apiserver oom_adj: -16
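The ops.go:34 line confirms the API server is shielded from the OOM killer: the bash one-liner above resolves the kube-apiserver PID and reads its /proc/<pid>/oom_adj, which reports -16. The same probe in Go (a sketch; assumes pgrep is available and the first match is the target process):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.Fields(string(out))[0] // first matching PID
        adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj))) // -16 here
    }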
	I0816 22:42:09.240662   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:09.839173   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:10.338906   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:10.839126   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:08.175656   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:10.670201   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:09.693093   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:12.193949   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:11.076847   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:13.572667   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:11.339623   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:11.839145   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:12.339335   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:12.839352   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:13.339016   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:13.838633   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:14.339209   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:14.839574   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:15.338605   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:15.838986   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:13.166828   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:15.170558   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:14.195434   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:16.691097   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:17.183312   18635 pod_ready.go:81] duration metric: took 4m0.398928004s waiting for pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace to be "Ready" ...
	E0816 22:42:17.183337   18635 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:42:17.183357   18635 pod_ready.go:38] duration metric: took 4m51.857756569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:42:17.183387   18635 kubeadm.go:604] restartCluster took 5m19.62322748s
	W0816 22:42:17.183554   18635 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:42:17.183589   18635 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0816 22:42:15.573445   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:17.576213   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:19.578780   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:16.339618   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:16.839112   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:17.338889   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:17.838606   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:18.339509   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:18.839537   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:19.338632   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:19.839240   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:20.339527   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:20.838664   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:17.671899   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:19.672963   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:20.586991   18635 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.403367986s)
	I0816 22:42:20.587083   18635 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:42:20.603414   18635 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:42:20.603499   18635 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:42:20.644469   18635 cri.go:76] found id: ""
	I0816 22:42:20.644547   18635 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:42:20.654179   18635 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:42:20.664747   18635 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:42:20.664790   18635 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
	I0816 22:42:21.326940   18635 out.go:204]   - Generating certificates and keys ...
	I0816 22:42:21.189008   18923 kubeadm.go:985] duration metric: took 12.319564991s to wait for elevateKubeSystemPrivileges.
	I0816 22:42:21.189042   18923 kubeadm.go:392] StartCluster complete in 5m9.132482632s
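The burst of `kubectl get sa default` runs above (one every 500ms from 22:42:09 to 22:42:21) is elevateKubeSystemPrivileges waiting for the freshly initialized cluster to create the "default" ServiceAccount before the cluster-admin binding is useful; the kubeadm.go:985 line reports the 12.3s this took. A client-go sketch of the same wait (interval from the log; the 2m cap is an assumption):

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll until the "default" ServiceAccount exists in the default namespace.
        err = wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
            _, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(),
                "default", metav1.GetOptions{})
            return err == nil, nil
        })
        fmt.Println("default service account ready:", err == nil)
    }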
	I0816 22:42:21.189068   18923 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:42:21.189186   18923 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:42:21.191084   18923 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0816 22:42:21.253468   18923 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0816 22:42:22.263255   18923 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20210816223156-6986" rescaled to 1
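The kapi.go:233/244 pair above is an optimistic-concurrency retry: the first attempt to rescale the coredns deployment hit "the object has been modified" because something else updated it between the Get and the Update, so the deployment is re-fetched and the change re-applied, succeeding a second later. client-go ships a helper for exactly this pattern (a sketch, not minikube's code):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        one := int32(1)
        // On a conflict, re-read the latest object and retry the mutation.
        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            dep, err := cs.AppsV1().Deployments("kube-system").Get(context.TODO(),
                "coredns", metav1.GetOptions{})
            if err != nil {
                return err
            }
            dep.Spec.Replicas = &one // rescale to 1, as in the log
            _, err = cs.AppsV1().Deployments("kube-system").Update(context.TODO(),
                dep, metav1.UpdateOptions{})
            return err
        })
        fmt.Println("rescale:", err)
    }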
	I0816 22:42:22.263323   18923 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.116.66 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0816 22:42:22.265111   18923 out.go:177] * Verifying Kubernetes components...
	I0816 22:42:22.265169   18923 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:42:22.263389   18923 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:42:22.263413   18923 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:42:22.265318   18923 addons.go:59] Setting storage-provisioner=true in profile "no-preload-20210816223156-6986"
	I0816 22:42:22.265343   18923 addons.go:135] Setting addon storage-provisioner=true in "no-preload-20210816223156-6986"
	I0816 22:42:22.265343   18923 addons.go:59] Setting dashboard=true in profile "no-preload-20210816223156-6986"
	W0816 22:42:22.265352   18923 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:42:22.265365   18923 addons.go:135] Setting addon dashboard=true in "no-preload-20210816223156-6986"
	W0816 22:42:22.265384   18923 addons.go:147] addon dashboard should already be in state true
	I0816 22:42:22.263563   18923 config.go:177] Loaded profile config "no-preload-20210816223156-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0816 22:42:22.265401   18923 addons.go:59] Setting metrics-server=true in profile "no-preload-20210816223156-6986"
	I0816 22:42:22.265412   18923 host.go:66] Checking if "no-preload-20210816223156-6986" exists ...
	I0816 22:42:22.265427   18923 addons.go:135] Setting addon metrics-server=true in "no-preload-20210816223156-6986"
	W0816 22:42:22.265437   18923 addons.go:147] addon metrics-server should already be in state true
	I0816 22:42:22.265384   18923 addons.go:59] Setting default-storageclass=true in profile "no-preload-20210816223156-6986"
	I0816 22:42:22.265462   18923 host.go:66] Checking if "no-preload-20210816223156-6986" exists ...
	I0816 22:42:22.265390   18923 host.go:66] Checking if "no-preload-20210816223156-6986" exists ...
	I0816 22:42:22.265461   18923 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20210816223156-6986"
	I0816 22:42:22.265940   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.265944   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.265957   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.265942   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.265975   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.265986   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.266089   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.266123   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.281969   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45777
	I0816 22:42:22.282708   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.282877   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40837
	I0816 22:42:22.283046   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42349
	I0816 22:42:22.283302   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.283322   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.283427   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.283650   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.283893   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.284078   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.284092   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.284330   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.284347   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.284461   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.284627   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.284665   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.284970   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.285003   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.285116   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.285285   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.293128   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38523
	I0816 22:42:22.293558   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.294059   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.294082   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.294429   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.294987   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.295053   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.298092   18923 addons.go:135] Setting addon default-storageclass=true in "no-preload-20210816223156-6986"
	W0816 22:42:22.298118   18923 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:42:22.298147   18923 host.go:66] Checking if "no-preload-20210816223156-6986" exists ...
	I0816 22:42:22.298560   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.298601   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.302416   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44833
	I0816 22:42:22.302994   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.303562   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.303593   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.304002   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.304209   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.305854   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34121
	I0816 22:42:22.306273   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.307236   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.307263   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.307631   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.307783   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.308340   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .DriverName
	I0816 22:42:22.310958   18923 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:42:22.311023   18923 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:42:22.311044   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:42:22.311064   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHHostname
	I0816 22:42:22.311377   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .DriverName
	I0816 22:42:22.313216   18923 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:42:22.311947   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45779
	I0816 22:42:22.313321   18923 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:42:22.313337   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:42:22.312981   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38669
	I0816 22:42:22.313354   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHHostname
	I0816 22:42:22.313674   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.313848   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.314124   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.314144   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.314391   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.314413   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.314493   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.314698   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.314875   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.315544   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.315591   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.319514   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .DriverName
	I0816 22:42:22.319736   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.321507   18923 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:42:22.320102   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:66:08", ip: ""} in network mk-no-preload-20210816223156-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:37:07 +0000 UTC Type:0 Mac:52:54:00:1e:66:08 Iaid: IPaddr:192.168.116.66 Prefix:24 Hostname:no-preload-20210816223156-6986 Clientid:01:52:54:00:1e:66:08}
	I0816 22:42:22.320309   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHPort
	I0816 22:42:22.320694   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.321331   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHPort
	I0816 22:42:22.321669   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:66:08", ip: ""} in network mk-no-preload-20210816223156-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:37:07 +0000 UTC Type:0 Mac:52:54:00:1e:66:08 Iaid: IPaddr:192.168.116.66 Prefix:24 Hostname:no-preload-20210816223156-6986 Clientid:01:52:54:00:1e:66:08}
	I0816 22:42:22.321594   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined IP address 192.168.116.66 and MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.180281   18635 out.go:204]   - Booting up control plane ...
	I0816 22:42:22.073806   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:24.079495   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:22.323189   18923 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:42:22.321708   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined IP address 192.168.116.66 and MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.321766   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHKeyPath
	I0816 22:42:22.321808   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHKeyPath
	I0816 22:42:22.323243   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:42:22.323341   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:42:22.323363   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHHostname
	I0816 22:42:22.323468   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHUsername
	I0816 22:42:22.323473   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHUsername
	I0816 22:42:22.323663   18923 sshutil.go:53] new ssh client: &{IP:192.168.116.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa Username:docker}
	I0816 22:42:22.323678   18923 sshutil.go:53] new ssh client: &{IP:192.168.116.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa Username:docker}
	I0816 22:42:22.328724   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45831
	I0816 22:42:22.329130   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.329535   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.329554   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.329851   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.329938   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.330124   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.330329   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:66:08", ip: ""} in network mk-no-preload-20210816223156-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:37:07 +0000 UTC Type:0 Mac:52:54:00:1e:66:08 Iaid: IPaddr:192.168.116.66 Prefix:24 Hostname:no-preload-20210816223156-6986 Clientid:01:52:54:00:1e:66:08}
	I0816 22:42:22.330363   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined IP address 192.168.116.66 and MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.330478   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHPort
	I0816 22:42:22.330620   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHKeyPath
	I0816 22:42:22.330750   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHUsername
	I0816 22:42:22.330873   18923 sshutil.go:53] new ssh client: &{IP:192.168.116.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa Username:docker}
	I0816 22:42:22.333001   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .DriverName
	I0816 22:42:22.333246   18923 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:42:22.333262   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:42:22.333279   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHHostname
	I0816 22:42:22.338603   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.339024   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:66:08", ip: ""} in network mk-no-preload-20210816223156-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:37:07 +0000 UTC Type:0 Mac:52:54:00:1e:66:08 Iaid: IPaddr:192.168.116.66 Prefix:24 Hostname:no-preload-20210816223156-6986 Clientid:01:52:54:00:1e:66:08}
	I0816 22:42:22.339055   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined IP address 192.168.116.66 and MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.339242   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHPort
	I0816 22:42:22.339393   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHKeyPath
	I0816 22:42:22.339570   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHUsername
	I0816 22:42:22.339731   18923 sshutil.go:53] new ssh client: &{IP:192.168.116.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa Username:docker}
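Each addon installer above opens its own SSH session into the VM with the machine's key and the "docker" user. For reference only, the same session can be opened by hand with the values from this run, or via the minikube shortcut:

    # manual login using the key path, user, and IP logged above
    ssh -o StrictHostKeyChecking=no \
      -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa \
      docker@192.168.116.66
    # equivalent shortcut
    minikube ssh -p no-preload-20210816223156-6986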
	I0816 22:42:22.671302   18923 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:42:22.671331   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:42:22.674471   18923 node_ready.go:35] waiting up to 6m0s for node "no-preload-20210816223156-6986" to be "Ready" ...
	I0816 22:42:22.674764   18923 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.116.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:42:22.680985   18923 node_ready.go:49] node "no-preload-20210816223156-6986" has status "Ready":"True"
	I0816 22:42:22.681006   18923 node_ready.go:38] duration metric: took 6.219914ms waiting for node "no-preload-20210816223156-6986" to be "Ready" ...
	I0816 22:42:22.681017   18923 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:42:22.690584   18923 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:22.758871   18923 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:42:22.908102   18923 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:42:22.908132   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:42:23.011738   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:42:23.011768   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:42:23.048103   18923 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:42:23.113442   18923 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:42:23.113472   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:42:23.311431   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:42:23.311461   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:42:23.413450   18923 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:42:23.601523   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:42:23.601554   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:42:23.797882   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:42:23.797908   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:42:23.957080   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:42:23.957109   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:42:24.496102   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:42:24.496134   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:42:24.715720   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:42:24.715807   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:42:24.725833   18923 pod_ready.go:102] pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:24.991135   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:42:24.991165   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:42:25.061259   18923 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.116.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.386242884s)
	I0816 22:42:25.061297   18923 start.go:728] {"host.minikube.internal": 192.168.116.1} host record injected into CoreDNS
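The 2.4s command that just completed is the host-record injection: minikube dumps the coredns ConfigMap, splices a hosts block in front of the forward directive with sed, and replaces the ConfigMap. Stripped of the VM-local kubectl path, the logged pipeline is:

    # restatement of the pipeline logged above; the gateway IP 192.168.116.1 is from this run
    kubectl -n kube-system get configmap coredns -o yaml \
      | sed '/forward . \/etc\/resolv.conf/i \        hosts {\n           192.168.116.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl replace -f -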
	I0816 22:42:25.085411   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:42:25.085463   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:42:25.132722   18923 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:42:25.402705   18923 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.64379015s)
	I0816 22:42:25.402772   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.402790   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.403123   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.403222   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:25.403245   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.403270   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.403197   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:25.403597   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:25.404574   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.404594   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:25.404607   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.404616   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.404837   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.404878   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:25.431424   18923 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.383276848s)
	I0816 22:42:25.431470   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.431484   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.431767   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.431781   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:25.431788   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:25.431799   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.431810   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.432092   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.432111   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:22.168138   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:24.174050   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:26.094382   18923 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.680878058s)
	I0816 22:42:26.094446   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:26.094474   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:26.094773   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:26.094830   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:26.094859   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:26.094885   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:26.094774   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:26.095167   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:26.095182   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:26.095193   18923 addons.go:313] Verifying addon metrics-server=true in "no-preload-20210816223156-6986"
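All four metrics-server manifests were staged under /etc/kubernetes/addons over SSH ("scp memory -->") and then applied in one shot with the cluster-pinned kubectl binary and kubeconfig, exactly as logged above:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply \
      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
      -f /etc/kubernetes/addons/metrics-server-service.yaml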
	I0816 22:42:26.855647   18923 pod_ready.go:102] pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:27.149522   18923 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.016735128s)
	I0816 22:42:27.149590   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:27.149605   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:27.149955   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:27.150053   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:27.150073   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:27.150083   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:27.150094   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:27.150330   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:27.150347   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:26.575022   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:28.575534   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:27.153345   18923 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0816 22:42:27.153375   18923 addons.go:344] enableAddons completed in 4.88997344s
	I0816 22:42:28.729990   18923 pod_ready.go:92] pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:28.730033   18923 pod_ready.go:81] duration metric: took 6.039413295s waiting for pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:28.730047   18923 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-vf8l9" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.743600   18923 pod_ready.go:97] error getting pod "coredns-78fcd69978-vf8l9" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-vf8l9" not found
	I0816 22:42:30.743642   18923 pod_ready.go:81] duration metric: took 2.013586217s waiting for pod "coredns-78fcd69978-vf8l9" in "kube-system" namespace to be "Ready" ...
	E0816 22:42:30.743656   18923 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-78fcd69978-vf8l9" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-vf8l9" not found
	I0816 22:42:30.743666   18923 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.757721   18923 pod_ready.go:92] pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.757745   18923 pod_ready.go:81] duration metric: took 14.064042ms waiting for pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.757758   18923 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.767053   18923 pod_ready.go:92] pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.767087   18923 pod_ready.go:81] duration metric: took 9.317684ms waiting for pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.767102   18923 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.777595   18923 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.777619   18923 pod_ready.go:81] duration metric: took 10.507966ms waiting for pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.777632   18923 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jhqbx" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.790967   18923 pod_ready.go:92] pod "kube-proxy-jhqbx" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.790991   18923 pod_ready.go:81] duration metric: took 13.350231ms waiting for pod "kube-proxy-jhqbx" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.791003   18923 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:26.174733   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:28.675892   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:30.951607   18923 pod_ready.go:92] pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.951630   18923 pod_ready.go:81] duration metric: took 160.617881ms waiting for pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.951642   18923 pod_ready.go:38] duration metric: took 8.270610362s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
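pod_ready polls each of those selectors through the Go client until every matching pod reports Ready. A rough standalone equivalent for one selector (kubectl wait, which is not what minikube actually runs) would be:

    kubectl -n kube-system wait pod -l k8s-app=kube-dns \
      --for=condition=Ready --timeout=6m0s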
	I0816 22:42:30.951663   18923 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:42:30.951723   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:42:30.970609   18923 api_server.go:70] duration metric: took 8.707242252s to wait for apiserver process to appear ...
	I0816 22:42:30.970637   18923 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:42:30.970650   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:42:30.979459   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 200:
	ok
	I0816 22:42:30.980742   18923 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:42:30.980766   18923 api_server.go:129] duration metric: took 10.122149ms to wait for apiserver health ...
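The healthz probe is a plain HTTPS GET against the endpoint logged above; /healthz is typically readable without client credentials under the default RBAC rules, so the same check can usually be reproduced with curl:

    curl -k https://192.168.116.66:8443/healthz
    # this run got HTTP 200 with body: ok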
	I0816 22:42:30.980777   18923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:42:31.156911   18923 system_pods.go:59] 8 kube-system pods found
	I0816 22:42:31.156942   18923 system_pods.go:61] "coredns-78fcd69978-9rlk6" [83d3b042-5692-4c44-b6f2-65020120666e] Running
	I0816 22:42:31.156949   18923 system_pods.go:61] "etcd-no-preload-20210816223156-6986" [4ea832d0-9ae2-4d9e-9aed-aa581e1ef4cf] Running
	I0816 22:42:31.156956   18923 system_pods.go:61] "kube-apiserver-no-preload-20210816223156-6986" [048331fe-fed3-4cca-a2c6-3b377e26647b] Running
	I0816 22:42:31.156965   18923 system_pods.go:61] "kube-controller-manager-no-preload-20210816223156-6986" [704424de-e6c8-4c70-bd80-c9b68c4c4b57] Running
	I0816 22:42:31.156971   18923 system_pods.go:61] "kube-proxy-jhqbx" [edacd358-46da-4db4-a8db-098f6edefb76] Running
	I0816 22:42:31.156977   18923 system_pods.go:61] "kube-scheduler-no-preload-20210816223156-6986" [7674933f-6de0-4090-867c-c082293d963c] Running
	I0816 22:42:31.156988   18923 system_pods.go:61] "metrics-server-7c784ccb57-dfjww" [7a744b20-6d7f-4001-a322-7e5615cbf15f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:42:31.156998   18923 system_pods.go:61] "storage-provisioner" [c8a5ee3a-f5df-4eeb-92b1-443ecef04ea2] Running
	I0816 22:42:31.157005   18923 system_pods.go:74] duration metric: took 176.222595ms to wait for pod list to return data ...
	I0816 22:42:31.157016   18923 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:42:31.345286   18923 default_sa.go:45] found service account: "default"
	I0816 22:42:31.345311   18923 default_sa.go:55] duration metric: took 188.289571ms for default service account to be created ...
	I0816 22:42:31.345319   18923 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:42:31.555450   18923 system_pods.go:86] 8 kube-system pods found
	I0816 22:42:31.555481   18923 system_pods.go:89] "coredns-78fcd69978-9rlk6" [83d3b042-5692-4c44-b6f2-65020120666e] Running
	I0816 22:42:31.555490   18923 system_pods.go:89] "etcd-no-preload-20210816223156-6986" [4ea832d0-9ae2-4d9e-9aed-aa581e1ef4cf] Running
	I0816 22:42:31.555497   18923 system_pods.go:89] "kube-apiserver-no-preload-20210816223156-6986" [048331fe-fed3-4cca-a2c6-3b377e26647b] Running
	I0816 22:42:31.555503   18923 system_pods.go:89] "kube-controller-manager-no-preload-20210816223156-6986" [704424de-e6c8-4c70-bd80-c9b68c4c4b57] Running
	I0816 22:42:31.555509   18923 system_pods.go:89] "kube-proxy-jhqbx" [edacd358-46da-4db4-a8db-098f6edefb76] Running
	I0816 22:42:31.555515   18923 system_pods.go:89] "kube-scheduler-no-preload-20210816223156-6986" [7674933f-6de0-4090-867c-c082293d963c] Running
	I0816 22:42:31.555529   18923 system_pods.go:89] "metrics-server-7c784ccb57-dfjww" [7a744b20-6d7f-4001-a322-7e5615cbf15f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:42:31.555541   18923 system_pods.go:89] "storage-provisioner" [c8a5ee3a-f5df-4eeb-92b1-443ecef04ea2] Running
	I0816 22:42:31.555553   18923 system_pods.go:126] duration metric: took 210.228822ms to wait for k8s-apps to be running ...
	I0816 22:42:31.555566   18923 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:42:31.555615   18923 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:42:31.581892   18923 system_svc.go:56] duration metric: took 26.318542ms WaitForService to wait for kubelet.
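WaitForService is a systemd liveness check over SSH: is-active with --quiet prints nothing and reports through its exit status. Reduced to the kubelet unit:

    sudo systemctl is-active --quiet kubelet; echo $?   # 0 while the unit is active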
	I0816 22:42:31.581920   18923 kubeadm.go:547] duration metric: took 9.318562144s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:42:31.581949   18923 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:42:31.744656   18923 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:42:31.744683   18923 node_conditions.go:123] node cpu capacity is 2
	I0816 22:42:31.744699   18923 node_conditions.go:105] duration metric: took 162.745304ms to run NodePressure ...
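The NodePressure check reads capacity and conditions from the node object; the two capacity figures logged above can be fetched directly:

    kubectl get node no-preload-20210816223156-6986 \
      -o jsonpath='{.status.capacity.cpu} {.status.capacity.ephemeral-storage}'
    # -> 2 17784752Ki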
	I0816 22:42:31.744708   18923 start.go:231] waiting for startup goroutines ...
	I0816 22:42:31.799332   18923 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0816 22:42:31.801873   18923 out.go:177] 
	W0816 22:42:31.802045   18923 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0816 22:42:31.803807   18923 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
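The warning above flags a two-minor-version skew between the host kubectl (1.20.5) and the cluster (1.22.0-rc.0); kubectl only supports a skew of one minor version in either direction. The suggested workaround runs the version-matched kubectl bundled with minikube:

    minikube kubectl -- get pods -A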
	I0816 22:42:31.805603   18923 out.go:177] * Done! kubectl is now configured to use "no-preload-20210816223156-6986" cluster and "default" namespace by default
	I0816 22:42:34.356504   18635 out.go:204]   - Configuring RBAC rules ...
	I0816 22:42:34.810198   18635 cni.go:93] Creating CNI manager for ""
	I0816 22:42:34.810227   18635 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:42:30.576523   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:33.074048   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:35.075110   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:31.178766   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:33.673945   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:35.674516   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:34.812149   18635 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:42:34.812218   18635 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:42:34.823097   18635 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
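The 457-byte payload written to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI config. The exact bytes are not in the log; the following is only a representative bridge + portmap conflist of the same shape, not the file this run wrote:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }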
	I0816 22:42:34.840052   18635 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:42:34.840175   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=old-k8s-version-20210816223154-6986 minikube.k8s.io/updated_at=2021_08_16T22_42_34_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:34.840179   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
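Those two kubectl runs finish bootstrap: one stamps every node with minikube's version/name/commit/updated_at labels, the other grants the kube-system default service account cluster-admin so addons can operate. Condensed (commit and updated_at labels omitted here), with this run's values:

    sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      label nodes --all --overwrite \
      minikube.k8s.io/version=v1.22.0 \
      minikube.k8s.io/name=old-k8s-version-20210816223154-6986
    sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      create clusterrolebinding minikube-rbac \
      --clusterrole=cluster-admin --serviceaccount=kube-system:default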
	I0816 22:42:35.279911   18635 ops.go:34] apiserver oom_adj: 16
	I0816 22:42:35.279930   18635 ops.go:39] adjusting apiserver oom_adj to -10
	I0816 22:42:35.279944   18635 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
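ops.go shields the apiserver from the kernel OOM killer: it reads the current score (16) and lowers it to -10. Step by step, the two logged commands are:

    pid=$(pgrep kube-apiserver)
    cat /proc/$pid/oom_adj                  # 16 in this run
    echo -10 | sudo tee /proc/$pid/oom_adj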
	I0816 22:42:35.279997   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:35.887807   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:36.388228   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:36.888072   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:37.388131   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:37.888197   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:37.075407   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:39.574205   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:38.169080   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:40.669388   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:38.388192   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:38.887529   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:39.387314   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:39.887397   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:40.388222   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:40.887817   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:41.388165   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:41.887336   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:42.387710   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:42.887452   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:41.575892   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:44.074399   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:43.168677   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:45.674667   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:43.388233   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:43.888191   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:44.388190   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:44.888073   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:45.387300   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:45.887633   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:46.388266   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:46.887918   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:47.387283   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:47.887770   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:46.074552   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:48.573015   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:48.387776   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:48.888189   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:49.388262   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:49.887594   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:50.137803   18635 kubeadm.go:985] duration metric: took 15.297678668s to wait for elevateKubeSystemPrivileges.
	I0816 22:42:50.137838   18635 kubeadm.go:392] StartCluster complete in 5m52.622280434s
	I0816 22:42:50.137865   18635 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:42:50.137996   18635 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:42:50.140032   18635 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:42:50.769953   18635 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20210816223154-6986" rescaled to 1
	I0816 22:42:50.770028   18635 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.94.246 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0816 22:42:50.771768   18635 out.go:177] * Verifying Kubernetes components...
	I0816 22:42:50.771833   18635 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:42:50.770075   18635 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:42:50.770097   18635 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:42:50.770295   18635 config.go:177] Loaded profile config "old-k8s-version-20210816223154-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0816 22:42:50.771981   18635 addons.go:59] Setting default-storageclass=true in profile "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.771981   18635 addons.go:59] Setting storage-provisioner=true in profile "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.771999   18635 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.772004   18635 addons.go:135] Setting addon storage-provisioner=true in "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.771995   18635 addons.go:59] Setting dashboard=true in profile "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.772027   18635 addons.go:135] Setting addon dashboard=true in "old-k8s-version-20210816223154-6986"
	W0816 22:42:50.772039   18635 addons.go:147] addon dashboard should already be in state true
	I0816 22:42:50.772074   18635 host.go:66] Checking if "old-k8s-version-20210816223154-6986" exists ...
	I0816 22:42:50.771981   18635 addons.go:59] Setting metrics-server=true in profile "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.772106   18635 addons.go:135] Setting addon metrics-server=true in "old-k8s-version-20210816223154-6986"
	W0816 22:42:50.772118   18635 addons.go:147] addon metrics-server should already be in state true
	I0816 22:42:50.772143   18635 host.go:66] Checking if "old-k8s-version-20210816223154-6986" exists ...
	W0816 22:42:50.772012   18635 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:42:50.772202   18635 host.go:66] Checking if "old-k8s-version-20210816223154-6986" exists ...
	I0816 22:42:50.772450   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.772491   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.772514   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.772550   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.772562   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.772590   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.772850   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.772907   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.786384   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43919
	I0816 22:42:50.786896   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.787436   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.787463   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.787854   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.788085   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.788330   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36229
	I0816 22:42:50.788749   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.789268   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.789290   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.789622   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.790176   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.790222   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.795830   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42399
	I0816 22:42:50.795865   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0816 22:42:50.796347   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.796355   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.796868   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.796888   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.796872   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.796936   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.797257   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.797329   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.797807   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.797848   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.797871   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.797906   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.799195   18635 addons.go:135] Setting addon default-storageclass=true in "old-k8s-version-20210816223154-6986"
	W0816 22:42:50.799218   18635 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:42:50.799243   18635 host.go:66] Checking if "old-k8s-version-20210816223154-6986" exists ...
	I0816 22:42:50.799640   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.799681   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.810531   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40999
	I0816 22:42:50.811204   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.811785   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.811802   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.812347   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.812540   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.815618   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44099
	I0816 22:42:50.815827   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34305
	I0816 22:42:50.816141   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.816227   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.816697   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.816714   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.816835   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.816854   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.817100   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.817172   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.817189   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .DriverName
	I0816 22:42:50.817352   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.819885   18635 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:42:50.817704   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.820954   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .DriverName
	I0816 22:42:50.821662   18635 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:42:50.821713   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.821719   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:42:50.821731   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:42:50.821750   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHHostname
	I0816 22:42:50.823437   18635 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:42:50.822272   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33579
	I0816 22:42:50.823493   18635 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:42:50.823505   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:42:50.823522   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHHostname
	I0816 22:42:50.823823   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.824293   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.824311   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.824702   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.824895   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.828911   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.828954   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .DriverName
	I0816 22:42:47.677798   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:50.171236   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:50.830871   18635 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:42:50.830990   18635 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:42:50.831003   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:42:50.831019   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHHostname
	I0816 22:42:50.829748   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:23:71", ip: ""} in network mk-old-k8s-version-20210816223154-6986: {Iface:virbr6 ExpiryTime:2021-08-16 23:36:29 +0000 UTC Type:0 Mac:52:54:00:bf:23:71 Iaid: IPaddr:192.168.94.246 Prefix:24 Hostname:old-k8s-version-20210816223154-6986 Clientid:01:52:54:00:bf:23:71}
	I0816 22:42:50.831084   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined IP address 192.168.94.246 and MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.829926   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHPort
	I0816 22:42:50.830586   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.831142   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:23:71", ip: ""} in network mk-old-k8s-version-20210816223154-6986: {Iface:virbr6 ExpiryTime:2021-08-16 23:36:29 +0000 UTC Type:0 Mac:52:54:00:bf:23:71 Iaid: IPaddr:192.168.94.246 Prefix:24 Hostname:old-k8s-version-20210816223154-6986 Clientid:01:52:54:00:bf:23:71}
	I0816 22:42:50.831171   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined IP address 192.168.94.246 and MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.831303   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHKeyPath
	I0816 22:42:50.831452   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHUsername
	I0816 22:42:50.831626   18635 sshutil.go:53] new ssh client: &{IP:192.168.94.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa Username:docker}
	I0816 22:42:50.831935   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHPort
	I0816 22:42:50.832101   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHKeyPath
	I0816 22:42:50.832284   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHUsername
	I0816 22:42:50.832496   18635 sshutil.go:53] new ssh client: &{IP:192.168.94.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa Username:docker}
	I0816 22:42:50.835565   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34581
	I0816 22:42:50.836045   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.836624   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.836646   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.836952   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.837022   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.837210   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.837385   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:23:71", ip: ""} in network mk-old-k8s-version-20210816223154-6986: {Iface:virbr6 ExpiryTime:2021-08-16 23:36:29 +0000 UTC Type:0 Mac:52:54:00:bf:23:71 Iaid: IPaddr:192.168.94.246 Prefix:24 Hostname:old-k8s-version-20210816223154-6986 Clientid:01:52:54:00:bf:23:71}
	I0816 22:42:50.837420   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined IP address 192.168.94.246 and MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.837596   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHPort
	I0816 22:42:50.837797   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHKeyPath
	I0816 22:42:50.837973   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHUsername
	I0816 22:42:50.838150   18635 sshutil.go:53] new ssh client: &{IP:192.168.94.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa Username:docker}
	I0816 22:42:50.839968   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .DriverName
	I0816 22:42:50.840224   18635 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:42:50.840241   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:42:50.840256   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHHostname
	I0816 22:42:50.846248   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.846622   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:23:71", ip: ""} in network mk-old-k8s-version-20210816223154-6986: {Iface:virbr6 ExpiryTime:2021-08-16 23:36:29 +0000 UTC Type:0 Mac:52:54:00:bf:23:71 Iaid: IPaddr:192.168.94.246 Prefix:24 Hostname:old-k8s-version-20210816223154-6986 Clientid:01:52:54:00:bf:23:71}
	I0816 22:42:50.846648   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined IP address 192.168.94.246 and MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.846901   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHPort
	I0816 22:42:50.847072   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHKeyPath
	I0816 22:42:50.847256   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHUsername
	I0816 22:42:50.847384   18635 sshutil.go:53] new ssh client: &{IP:192.168.94.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa Username:docker}
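
Each "sshutil.go:53] new ssh client" line above is minikube opening an SSH session into the VM so it can push addon manifests; the "scp memory -->" lines write in-memory assets over that session rather than copying local files. A minimal sketch of the same connection pattern in Go with golang.org/x/crypto/ssh, reusing the IP, port, user and key path reported by the log (the command at the end is only an illustration, not what minikube runs):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// User, address and key path are the ones reported by sshutil above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // tolerable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.94.246:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("sudo ls /etc/kubernetes/addons") // illustrative command
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}
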
	I0816 22:42:51.069324   18635 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20210816223154-6986" to be "Ready" ...
	I0816 22:42:51.069363   18635 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:42:51.074198   18635 node_ready.go:49] node "old-k8s-version-20210816223154-6986" has status "Ready":"True"
	I0816 22:42:51.074219   18635 node_ready.go:38] duration metric: took 4.853226ms waiting for node "old-k8s-version-20210816223154-6986" to be "Ready" ...
	I0816 22:42:51.074228   18635 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:42:51.079427   18635 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:51.095977   18635 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:42:51.095994   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:42:51.114667   18635 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:42:51.127402   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:42:51.127423   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:42:51.139080   18635 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:42:51.142203   18635 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:42:51.142227   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:42:51.184024   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:42:51.184049   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:42:51.229690   18635 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:42:51.229719   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:42:51.258163   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:42:51.258186   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:42:51.292848   18635 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:42:51.348950   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:42:51.348979   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:42:51.432982   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:42:51.433017   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:42:51.500730   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:42:51.500762   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:42:51.566104   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:42:51.566132   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:42:51.669547   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:42:51.669569   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:42:51.755011   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:42:51.755042   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:42:51.807684   18635 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:42:52.571594   18635 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.502197835s)
	I0816 22:42:52.571636   18635 start.go:728] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
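
The pipeline that just completed (1.5s) rewrites the coredns ConfigMap in place: the sed expression inserts a hosts block immediately before the "forward . /etc/resolv.conf" line of the Corefile, so that host.minikube.internal resolves to the host-side gateway. After the replace, the Corefile carries:

        hosts {
           192.168.94.1 host.minikube.internal
           fallthrough
        }

The fallthrough directive hands every other name back to the remaining plugins, so ordinary cluster DNS behaviour is unchanged.
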
	I0816 22:42:52.759651   18635 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.644944376s)
	I0816 22:42:52.759687   18635 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.620572399s)
	I0816 22:42:52.759727   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.759743   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.759751   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.759765   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.760012   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.760058   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:52.760071   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.760080   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.760115   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.760131   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:52.760156   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.760170   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.761684   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.761690   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | Closing plugin on server side
	I0816 22:42:52.761704   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:52.761719   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | Closing plugin on server side
	I0816 22:42:52.761689   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.761794   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:52.761806   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.761817   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.762085   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.762108   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:53.390381   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:53.699731   18635 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.406829667s)
	I0816 22:42:53.699820   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:53.699836   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:53.700202   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:53.700222   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:53.700238   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:53.700249   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:53.700503   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:53.700523   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:53.700538   18635 addons.go:313] Verifying addon metrics-server=true in "old-k8s-version-20210816223154-6986"
	I0816 22:42:54.131359   18635 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.323617191s)
	I0816 22:42:54.131419   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:54.131434   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:54.131720   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | Closing plugin on server side
	I0816 22:42:54.131759   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:54.131767   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:54.131782   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:54.131793   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:54.132029   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:54.132048   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:50.574063   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:53.075372   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:52.670047   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:54.673975   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:54.134079   18635 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0816 22:42:54.134104   18635 addons.go:344] enableAddons completed in 3.364015112s
	I0816 22:42:55.589126   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:57.594328   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:55.581048   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:58.075675   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:57.167077   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:59.670483   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:59.594568   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:02.093248   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:00.574293   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:02.574884   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:05.075277   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:02.159000   18929 pod_ready.go:81] duration metric: took 4m0.401738783s waiting for pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace to be "Ready" ...
	E0816 22:43:02.159021   18929 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:43:02.159049   18929 pod_ready.go:38] duration metric: took 4m41.323642164s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:02.159079   18929 kubeadm.go:604] restartCluster took 5m14.823391905s
	W0816 22:43:02.159203   18929 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:43:02.159238   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0816 22:43:05.238090   18929 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.078818721s)
	I0816 22:43:05.238168   18929 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:43:05.256580   18929 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:43:05.256649   18929 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:43:05.300644   18929 cri.go:76] found id: ""
	I0816 22:43:05.300755   18929 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:43:05.308191   18929 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:43:05.315888   18929 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:43:05.315936   18929 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
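
The sequence above is minikube giving up on restarting the existing control plane and rebuilding it: kubeadm reset, stop the kubelet, confirm via crictl that no kube-system containers survived, swap in the freshly rendered kubeadm.yaml, and then run kubeadm init from scratch (the ls check exits with status 2 precisely because reset already deleted the /etc/kubernetes/*.conf files, so there is no stale config to clean up). A sketch of the same step sequence driven through os/exec, roughly the role ssh_runner.go plays over SSH; the commands are copied from the log lines above:

package main

import (
	"fmt"
	"os/exec"
)

// run executes one step and surfaces its combined output, much as
// ssh_runner.go does on the VM (minus the SSH hop).
func run(cmdline string) error {
	out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
	fmt.Printf("$ %s\n%s", cmdline, out)
	return err
}

func main() {
	// Paths and flags copied from the log lines above.
	steps := []string{
		"sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force",
		"sudo systemctl stop -f kubelet",
		`sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"`,
		"sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml",
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}

After the loop, the real flow continues with the long kubeadm init invocation shown above, with preflight errors for the leftover manifest and data directories explicitly ignored.
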
	I0816 22:43:05.885054   18929 out.go:204]   - Generating certificates and keys ...
	I0816 22:43:04.591211   18635 pod_ready.go:92] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:04.591250   18635 pod_ready.go:81] duration metric: took 13.511789308s waiting for pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:04.591266   18635 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jmg6d" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:04.598816   18635 pod_ready.go:92] pod "kube-proxy-jmg6d" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:04.598833   18635 pod_ready.go:81] duration metric: took 7.559474ms waiting for pod "kube-proxy-jmg6d" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:04.598842   18635 pod_ready.go:38] duration metric: took 13.524600915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:04.598861   18635 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:43:04.598908   18635 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:43:04.615708   18635 api_server.go:70] duration metric: took 13.845635855s to wait for apiserver process to appear ...
	I0816 22:43:04.615739   18635 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:43:04.615748   18635 api_server.go:239] Checking apiserver healthz at https://192.168.94.246:8443/healthz ...
	I0816 22:43:04.624860   18635 api_server.go:265] https://192.168.94.246:8443/healthz returned 200:
	ok
	I0816 22:43:04.626456   18635 api_server.go:139] control plane version: v1.14.0
	I0816 22:43:04.626478   18635 api_server.go:129] duration metric: took 10.733471ms to wait for apiserver health ...
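
The healthz probe above is nothing more than an HTTPS GET against the apiserver, considered healthy once it returns 200 with body "ok". A minimal equivalent in Go; the real client trusts the cluster CA, and InsecureSkipVerify here only keeps the sketch short:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Address and port as reported by api_server.go above.
	c := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := c.Get("https://192.168.94.246:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // a healthy apiserver prints "200: ok"
}
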
	I0816 22:43:04.626487   18635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:43:04.631832   18635 system_pods.go:59] 4 kube-system pods found
	I0816 22:43:04.631861   18635 system_pods.go:61] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.631867   18635 system_pods.go:61] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.631877   18635 system_pods.go:61] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:04.631883   18635 system_pods.go:61] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.631892   18635 system_pods.go:74] duration metric: took 5.399191ms to wait for pod list to return data ...
	I0816 22:43:04.631901   18635 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:43:04.635992   18635 default_sa.go:45] found service account: "default"
	I0816 22:43:04.636015   18635 default_sa.go:55] duration metric: took 4.107562ms for default service account to be created ...
	I0816 22:43:04.636025   18635 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:43:04.640667   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:04.640691   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.640697   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.640704   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:04.640709   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.640726   18635 retry.go:31] will retry after 305.063636ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:04.951327   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:04.951357   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.951365   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.951377   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:04.951384   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.951402   18635 retry.go:31] will retry after 338.212508ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:05.295109   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:05.295143   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.295154   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.295165   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:05.295174   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.295193   18635 retry.go:31] will retry after 378.459802ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:05.683391   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:05.683423   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.683431   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.683442   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:05.683452   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.683472   18635 retry.go:31] will retry after 469.882201ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:06.158721   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:06.158752   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.158757   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.158765   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:06.158770   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.158786   18635 retry.go:31] will retry after 667.365439ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:06.831740   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:06.831771   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.831781   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.831790   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:06.831799   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.831818   18635 retry.go:31] will retry after 597.243124ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:07.434457   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:07.434482   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:07.434487   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:07.434494   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:07.434499   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:07.434513   18635 retry.go:31] will retry after 789.889932ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:07.075753   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:09.575726   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:06.996973   18929 out.go:204]   - Booting up control plane ...
	I0816 22:43:08.229786   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:08.229819   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:08.229827   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:08.229840   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:08.229845   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:08.229863   18635 retry.go:31] will retry after 951.868007ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:09.187817   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:09.187852   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:09.187862   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:09.187873   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:09.187878   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:09.187895   18635 retry.go:31] will retry after 1.341783893s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:10.534567   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:10.534608   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:10.534615   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:10.534627   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:10.534634   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:10.534652   18635 retry.go:31] will retry after 1.876813009s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:12.418546   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:12.418572   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:12.418579   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:12.418590   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:12.418596   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:12.418612   18635 retry.go:31] will retry after 2.6934314s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:11.066632   19204 pod_ready.go:81] duration metric: took 4m0.008014176s waiting for pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace to be "Ready" ...
	E0816 22:43:11.066660   19204 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:43:11.066679   19204 pod_ready.go:38] duration metric: took 4m27.623084704s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:11.066704   19204 kubeadm.go:604] restartCluster took 5m3.415779611s
	W0816 22:43:11.066819   19204 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:43:11.066856   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0816 22:43:14.269873   19204 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.202987817s)
	I0816 22:43:14.269950   19204 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:43:14.288386   19204 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:43:14.288469   19204 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:43:14.333856   19204 cri.go:76] found id: ""
	I0816 22:43:14.333935   19204 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:43:14.343737   19204 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:43:14.352599   19204 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:43:14.352646   19204 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0816 22:43:14.930093   19204 out.go:204]   - Generating certificates and keys ...
	I0816 22:43:15.118830   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:15.118862   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:15.118872   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:15.118882   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:15.118889   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:15.118907   18635 retry.go:31] will retry after 2.494582248s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:17.619339   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:17.619375   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:17.619384   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:17.619395   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:17.619403   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:17.619422   18635 retry.go:31] will retry after 3.420895489s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:15.729873   19204 out.go:204]   - Booting up control plane ...
	I0816 22:43:21.047237   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:21.047269   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:21.047276   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:21.047287   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:21.047294   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:21.047310   18635 retry.go:31] will retry after 4.133785681s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:22.636356   18929 out.go:204]   - Configuring RBAC rules ...
	I0816 22:43:23.371015   18929 cni.go:93] Creating CNI manager for ""
	I0816 22:43:23.371043   18929 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:43:23.373006   18929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:43:23.373076   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:43:23.386712   18929 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
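
The 457-byte file written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For orientation, a typical bridge conflist of the kind this step installs looks roughly like the following; the field values are illustrative, not the exact file minikube writes:

{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
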
	I0816 22:43:23.415554   18929 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:43:23.415693   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:23.415773   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=embed-certs-20210816223333-6986 minikube.k8s.io/updated_at=2021_08_16T22_43_23_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:24.042222   18929 ops.go:34] apiserver oom_adj: -16
	I0816 22:43:24.042207   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:24.699493   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:25.199877   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:25.699926   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:25.189718   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:25.189751   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:25.189758   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:25.189768   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:25.189775   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:25.189795   18635 retry.go:31] will retry after 5.595921491s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:26.199444   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:26.699547   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:27.199378   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:27.699869   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:28.200370   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:28.700011   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:29.199882   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:29.700066   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:30.200161   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:30.699359   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:31.887219   19204 out.go:204]   - Configuring RBAC rules ...
	I0816 22:43:32.571790   19204 cni.go:93] Creating CNI manager for ""
	I0816 22:43:32.571817   19204 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:43:30.804838   18635 system_pods.go:86] 8 kube-system pods found
	I0816 22:43:30.804876   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:30.804884   18635 system_pods.go:89] "etcd-old-k8s-version-20210816223154-6986" [61433b17-fee3-11eb-bea8-525400bf2371] Pending
	I0816 22:43:30.804891   18635 system_pods.go:89] "kube-apiserver-old-k8s-version-20210816223154-6986" [5e48aade-fee3-11eb-bea8-525400bf2371] Pending
	I0816 22:43:30.804897   18635 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210816223154-6986" [5e48d2c6-fee3-11eb-bea8-525400bf2371] Pending
	I0816 22:43:30.804902   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:30.804908   18635 system_pods.go:89] "kube-scheduler-old-k8s-version-20210816223154-6986" [60110a1b-fee3-11eb-bea8-525400bf2371] Pending
	I0816 22:43:30.804918   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:30.804925   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:30.804943   18635 retry.go:31] will retry after 6.3346098s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
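
The steadily growing delays in the retry lines above (305ms, 338ms, 378ms, ... 6.3s) are the signature of exponential backoff with jitter wrapped around one check: list the kube-system pods and report which control-plane components are still missing. minikube's own retry helper is not shown in the log; a sketch with the same shape, using the wait package from k8s.io/apimachinery:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// missingComponents stands in for the check behind system_pods.go: list the
// kube-system pods and report which control-plane components are absent.
// (Hypothetical stub; the real check queries the apiserver.)
func missingComponents() []string {
	return nil
}

func main() {
	backoff := wait.Backoff{
		Duration: 300 * time.Millisecond, // close to the 305ms first delay in the log
		Factor:   1.3,                    // each delay grows, as in the log
		Jitter:   0.1,
		Steps:    15,
	}
	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
		if missing := missingComponents(); len(missing) > 0 {
			fmt.Printf("will retry: missing components: %v\n", missing)
			return false, nil // keep going; wait applies the next, longer delay
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("timed out waiting for the control plane:", err)
	}
}
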
	I0816 22:43:32.573869   19204 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:43:32.573957   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:43:32.585155   19204 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:43:32.601590   19204 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:43:32.601652   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:32.601677   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=default-k8s-different-port-20210816223418-6986 minikube.k8s.io/updated_at=2021_08_16T22_43_32_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:32.631177   19204 ops.go:34] apiserver oom_adj: -16
	I0816 22:43:33.115780   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:33.764597   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:34.265250   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:34.764717   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:31.200176   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:31.700178   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:32.200029   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:32.699789   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:33.200341   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:33.699709   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:34.199959   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:34.699635   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:35.199401   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:35.497436   18929 kubeadm.go:985] duration metric: took 12.081799779s to wait for elevateKubeSystemPrivileges.
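
The burst of "kubectl get sa default" invocations above, one every ~500ms, is minikube waiting for the controller manager to create the default service account before it can bind cluster-admin to kube-system (the elevateKubeSystemPrivileges step that just reported 12.08s). The same poll, sketched with wait.PollImmediate; the kubeconfig path is the one from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	start := time.Now()
	// Every 500ms, ask kubectl whether the "default" ServiceAccount exists yet;
	// it appears once the controller manager's SA controller has started.
	err := wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
		err := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig", "/var/lib/minikube/kubeconfig").Run()
		return err == nil, nil // "not found yet" is not fatal: keep polling
	})
	if err != nil {
		fmt.Println("gave up waiting for the default service account:", err)
		return
	}
	fmt.Printf("default service account ready after %s\n", time.Since(start))
}
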
	I0816 22:43:35.497485   18929 kubeadm.go:392] StartCluster complete in 5m48.214136187s
	I0816 22:43:35.497508   18929 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:43:35.497637   18929 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:43:35.500294   18929 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:43:36.034903   18929 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20210816223333-6986" rescaled to 1
	I0816 22:43:36.034983   18929 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.105.129 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:43:36.036731   18929 out.go:177] * Verifying Kubernetes components...
	I0816 22:43:36.035020   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:43:36.036813   18929 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:43:36.035043   18929 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:43:36.036910   18929 addons.go:59] Setting storage-provisioner=true in profile "embed-certs-20210816223333-6986"
	I0816 22:43:36.036926   18929 addons.go:59] Setting default-storageclass=true in profile "embed-certs-20210816223333-6986"
	I0816 22:43:36.036937   18929 addons.go:135] Setting addon storage-provisioner=true in "embed-certs-20210816223333-6986"
	I0816 22:43:36.036942   18929 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20210816223333-6986"
	W0816 22:43:36.036948   18929 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:43:36.036978   18929 host.go:66] Checking if "embed-certs-20210816223333-6986" exists ...
	I0816 22:43:36.037395   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.037430   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.037443   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.037445   18929 addons.go:59] Setting metrics-server=true in profile "embed-certs-20210816223333-6986"
	I0816 22:43:36.037462   18929 addons.go:135] Setting addon metrics-server=true in "embed-certs-20210816223333-6986"
	I0816 22:43:36.037464   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	W0816 22:43:36.037471   18929 addons.go:147] addon metrics-server should already be in state true
	I0816 22:43:36.036912   18929 addons.go:59] Setting dashboard=true in profile "embed-certs-20210816223333-6986"
	I0816 22:43:36.037504   18929 addons.go:135] Setting addon dashboard=true in "embed-certs-20210816223333-6986"
	W0816 22:43:36.037509   18929 addons.go:147] addon dashboard should already be in state true
	I0816 22:43:36.035195   18929 config.go:177] Loaded profile config "embed-certs-20210816223333-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:43:36.037546   18929 host.go:66] Checking if "embed-certs-20210816223333-6986" exists ...
	I0816 22:43:36.037680   18929 host.go:66] Checking if "embed-certs-20210816223333-6986" exists ...
	I0816 22:43:36.037934   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.037965   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.038094   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.038128   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.052922   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45791
	I0816 22:43:36.053393   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.053967   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.053996   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.054376   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.054999   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.055044   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.057606   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34275
	I0816 22:43:36.057965   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.058476   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.058504   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.058889   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.059518   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.059555   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.061564   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35827
	I0816 22:43:36.061953   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.062427   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.062448   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.062776   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.062919   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetState
	I0816 22:43:36.067479   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35939
	I0816 22:43:36.067916   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.068397   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.068420   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.068756   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.069319   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.069365   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.070906   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46287
	I0816 22:43:36.071487   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.071940   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.071962   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.072029   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43295
	I0816 22:43:36.072345   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.072346   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.072513   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetState
	I0816 22:43:36.072847   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.072869   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.073161   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.073332   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetState
	I0816 22:43:36.077207   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:43:36.077344   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:43:36.079180   18929 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:43:36.080548   18929 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:43:36.079295   18929 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:43:36.080582   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:43:36.080603   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:43:36.081867   18929 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:43:36.081926   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:43:36.081938   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:43:36.081954   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:43:36.082858   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37409
	I0816 22:43:36.083299   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.083845   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.083868   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.084213   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.084387   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetState
	I0816 22:43:36.086977   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.087634   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:43:36.087699   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:43:36.087722   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.087759   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:43:36.087803   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:43:36.087949   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:43:37.147660   18635 system_pods.go:86] 8 kube-system pods found
	I0816 22:43:37.147701   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:37.147710   18635 system_pods.go:89] "etcd-old-k8s-version-20210816223154-6986" [61433b17-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:37.147718   18635 system_pods.go:89] "kube-apiserver-old-k8s-version-20210816223154-6986" [5e48aade-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:37.147724   18635 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210816223154-6986" [5e48d2c6-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:37.147730   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:37.147736   18635 system_pods.go:89] "kube-scheduler-old-k8s-version-20210816223154-6986" [60110a1b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:37.147745   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:37.147755   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:37.147764   18635 system_pods.go:126] duration metric: took 32.511733609s to wait for k8s-apps to be running ...
	I0816 22:43:37.147783   18635 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:43:37.147836   18635 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:43:37.164370   18635 system_svc.go:56] duration metric: took 16.579311ms WaitForService to wait for kubelet.
	I0816 22:43:37.164403   18635 kubeadm.go:547] duration metric: took 46.394336574s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:43:37.164433   18635 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:43:37.168097   18635 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:43:37.168129   18635 node_conditions.go:123] node cpu capacity is 2
	I0816 22:43:37.168144   18635 node_conditions.go:105] duration metric: took 3.70586ms to run NodePressure ...
	I0816 22:43:37.168156   18635 start.go:231] waiting for startup goroutines ...
	I0816 22:43:37.217144   18635 start.go:462] kubectl: 1.20.5, cluster: 1.14.0 (minor skew: 6)
	I0816 22:43:37.219305   18635 out.go:177] 
	W0816 22:43:37.219480   18635 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.14.0.
	I0816 22:43:37.221278   18635 out.go:177]   - Want kubectl v1.14.0? Try 'minikube kubectl -- get pods -A'
	I0816 22:43:37.223010   18635 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-20210816223154-6986" cluster and "default" namespace by default
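The version-skew warning a few lines up fires because kubectl only guarantees compatibility within one minor version of the apiserver, and 1.20.5 against a 1.14.0 cluster is a skew of six. The same client/server pair can be read back directly (flag as in kubectl v1.20; --short has since been removed upstream):

    /usr/local/bin/kubectl version --short
    # Client Version: v1.20.5   (as reported by start.go:462 above)
    # Server Version: v1.14.0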
	I0816 22:43:35.265455   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:35.765450   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:36.264605   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:36.764601   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:37.265049   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:37.764595   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:38.265287   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:38.764994   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:39.265056   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:39.765476   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:36.089340   18929 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:43:36.089400   18929 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:43:36.089413   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:43:36.088130   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:43:36.089430   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:43:36.088890   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.089473   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:43:36.089505   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.089703   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:43:36.089898   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:43:36.090090   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:43:36.090267   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:43:36.094836   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.095297   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:43:36.095323   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.095512   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:43:36.095645   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:43:36.095759   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:43:36.095851   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:43:36.098057   18929 addons.go:135] Setting addon default-storageclass=true in "embed-certs-20210816223333-6986"
	W0816 22:43:36.098079   18929 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:43:36.098104   18929 host.go:66] Checking if "embed-certs-20210816223333-6986" exists ...
	I0816 22:43:36.098559   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.098603   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.109741   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39575
	I0816 22:43:36.110180   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.110794   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.110819   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.111190   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.111821   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.111864   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.123621   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45183
	I0816 22:43:36.124053   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.124503   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.124519   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.124829   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.125022   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetState
	I0816 22:43:36.128253   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:43:36.128476   18929 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:43:36.128494   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:43:36.128513   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:43:36.134156   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.134493   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:43:36.134521   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.134626   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:43:36.134834   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:43:36.135010   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:43:36.135176   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:43:36.334796   18929 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:43:36.462564   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:43:36.462619   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:43:36.510558   18929 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:43:36.513334   18929 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:43:36.513356   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:43:36.551208   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:43:36.551256   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:43:36.570189   18929 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:43:36.570216   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:43:36.657218   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:43:36.657250   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:43:36.692197   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:43:36.692227   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:43:36.774111   18929 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20210816223333-6986" to be "Ready" ...
	I0816 22:43:36.774340   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:43:36.790295   18929 node_ready.go:49] node "embed-certs-20210816223333-6986" has status "Ready":"True"
	I0816 22:43:36.790320   18929 node_ready.go:38] duration metric: took 16.177495ms waiting for node "embed-certs-20210816223333-6986" to be "Ready" ...
	I0816 22:43:36.790335   18929 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:36.797297   18929 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-6zv97" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:36.858095   18929 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:43:36.858120   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:43:36.981263   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:43:36.981292   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:43:37.007726   18929 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:43:37.229172   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:43:37.229198   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:43:37.412428   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:43:37.412464   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:43:37.604490   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:43:37.604516   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:43:37.864046   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:43:37.864072   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:43:37.954509   18929 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:43:38.628148   18929 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.293312909s)
	I0816 22:43:38.628197   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:38.628206   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:38.628466   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:38.628488   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:38.628499   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:38.628509   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:38.628847   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:38.628869   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:38.797491   18929 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.286893282s)
	I0816 22:43:38.797551   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:38.797565   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:38.797846   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Closing plugin on server side
	I0816 22:43:38.797888   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:38.797896   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:38.797904   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:38.797913   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:38.798184   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:38.798203   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:38.798216   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:38.798226   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:38.798467   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:38.798483   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:38.814757   18929 pod_ready.go:102] pod "coredns-558bd4d5db-6zv97" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:39.223137   18929 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.448766478s)
	I0816 22:43:39.223172   18929 start.go:728] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS
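The 2.45s Completed: line above is the CoreDNS host-record injection, reflowed here for readability: dump the coredns ConfigMap, use sed to splice a hosts block resolving host.minikube.internal to the host-side gateway ahead of the forward plugin, and replace the object in place:

    sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system get configmap coredns -o yaml \
      | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' \
      | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -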
	I0816 22:43:39.504206   18929 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.496431754s)
	I0816 22:43:39.504273   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:39.504297   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:39.504564   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:39.504585   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:39.504598   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:39.504611   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:39.504854   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:39.504863   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Closing plugin on server side
	I0816 22:43:39.504875   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:39.504890   18929 addons.go:313] Verifying addon metrics-server=true in "embed-certs-20210816223333-6986"
	I0816 22:43:39.815632   18929 pod_ready.go:97] error getting pod "coredns-558bd4d5db-6zv97" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-6zv97" not found
	I0816 22:43:39.815668   18929 pod_ready.go:81] duration metric: took 3.018337051s waiting for pod "coredns-558bd4d5db-6zv97" in "kube-system" namespace to be "Ready" ...
	E0816 22:43:39.815681   18929 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-6zv97" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-6zv97" not found
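The not-found error above is benign: the earlier rescale of the coredns Deployment to one replica deleted the pod this wait was tracking, so the loop records the miss and moves on to the surviving replica. Selecting by label rather than pod name sidesteps that churn; a hedged one-liner against the same cluster:

    kubectl -n kube-system get pods -l k8s-app=kube-dns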
	I0816 22:43:39.815691   18929 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-mfshm" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:40.809470   18929 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.854902802s)
	I0816 22:43:40.809543   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:40.809566   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:40.811279   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:40.811299   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:40.811310   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:40.811320   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:40.811328   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Closing plugin on server side
	I0816 22:43:40.811538   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:40.811553   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:40.811561   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Closing plugin on server side
	I0816 22:43:40.813830   18929 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0816 22:43:40.813854   18929 addons.go:344] enableAddons completed in 4.778818205s
	I0816 22:43:41.867317   18929 pod_ready.go:102] pod "coredns-558bd4d5db-mfshm" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:43.368862   18929 pod_ready.go:92] pod "coredns-558bd4d5db-mfshm" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:43.368890   18929 pod_ready.go:81] duration metric: took 3.553191611s waiting for pod "coredns-558bd4d5db-mfshm" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.368903   18929 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.378704   18929 pod_ready.go:92] pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:43.378725   18929 pod_ready.go:81] duration metric: took 9.814161ms waiting for pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.378739   18929 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.402730   18929 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:43.402755   18929 pod_ready.go:81] duration metric: took 24.005322ms waiting for pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.402769   18929 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.411087   18929 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:43.411108   18929 pod_ready.go:81] duration metric: took 8.330836ms waiting for pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.411120   18929 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zwcwz" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.420161   18929 pod_ready.go:92] pod "kube-proxy-zwcwz" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:43.420183   18929 pod_ready.go:81] duration metric: took 9.054321ms waiting for pod "kube-proxy-zwcwz" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.420195   18929 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.764290   18929 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:43.764315   18929 pod_ready.go:81] duration metric: took 344.109074ms waiting for pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.764327   18929 pod_ready.go:38] duration metric: took 6.973978865s for extra waiting for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:43.764347   18929 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:43:43.764398   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:43:43.785185   18929 api_server.go:70] duration metric: took 7.750163085s to wait for apiserver process to appear ...
	I0816 22:43:43.785212   18929 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:43:43.785222   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:43:43.795735   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 200:
	ok
	I0816 22:43:43.797225   18929 api_server.go:139] control plane version: v1.21.3
	I0816 22:43:43.797243   18929 api_server.go:129] duration metric: took 12.025112ms to wait for apiserver health ...
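The healthz probe above is a plain unauthenticated GET; the default system:public-info-viewer binding permits it for anonymous callers. A manual equivalent, with -k because the apiserver presents a cluster-internal CA:

    curl -k https://192.168.105.129:8443/healthz
    # ok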
	I0816 22:43:43.797252   18929 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:43:43.971546   18929 system_pods.go:59] 8 kube-system pods found
	I0816 22:43:43.971578   18929 system_pods.go:61] "coredns-558bd4d5db-mfshm" [cb9ac226-b63f-4de1-b4af-b8e2bf280d95] Running
	I0816 22:43:43.971584   18929 system_pods.go:61] "etcd-embed-certs-20210816223333-6986" [333a4b44-c417-46e6-8653-c1d24391c7ca] Running
	I0816 22:43:43.971590   18929 system_pods.go:61] "kube-apiserver-embed-certs-20210816223333-6986" [414c58e9-8dcf-4f0c-9a5e-ff21a694067d] Running
	I0816 22:43:43.971596   18929 system_pods.go:61] "kube-controller-manager-embed-certs-20210816223333-6986" [c80d067f-ee6a-4e6a-b062-c2ff64c6bd81] Running
	I0816 22:43:43.971601   18929 system_pods.go:61] "kube-proxy-zwcwz" [f85562a3-8576-4dbf-a2b2-3f6a3d199df3] Running
	I0816 22:43:43.971608   18929 system_pods.go:61] "kube-scheduler-embed-certs-20210816223333-6986" [92b9b318-e6e4-4891-9609-5fe26593bcdb] Running
	I0816 22:43:43.971621   18929 system_pods.go:61] "metrics-server-7c784ccb57-qfrpw" [abb75357-7b33-4327-aa7f-8e9c15a192f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:43.971632   18929 system_pods.go:61] "storage-provisioner" [f3fc0038-f88e-416f-81e3-fb387b0e010a] Running
	I0816 22:43:43.971639   18929 system_pods.go:74] duration metric: took 174.380965ms to wait for pod list to return data ...
	I0816 22:43:43.971647   18929 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:43:44.164541   18929 default_sa.go:45] found service account: "default"
	I0816 22:43:44.164564   18929 default_sa.go:55] duration metric: took 192.910888ms for default service account to be created ...
	I0816 22:43:44.164584   18929 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:43:44.367138   18929 system_pods.go:86] 8 kube-system pods found
	I0816 22:43:44.367172   18929 system_pods.go:89] "coredns-558bd4d5db-mfshm" [cb9ac226-b63f-4de1-b4af-b8e2bf280d95] Running
	I0816 22:43:44.367181   18929 system_pods.go:89] "etcd-embed-certs-20210816223333-6986" [333a4b44-c417-46e6-8653-c1d24391c7ca] Running
	I0816 22:43:44.367190   18929 system_pods.go:89] "kube-apiserver-embed-certs-20210816223333-6986" [414c58e9-8dcf-4f0c-9a5e-ff21a694067d] Running
	I0816 22:43:44.367197   18929 system_pods.go:89] "kube-controller-manager-embed-certs-20210816223333-6986" [c80d067f-ee6a-4e6a-b062-c2ff64c6bd81] Running
	I0816 22:43:44.367204   18929 system_pods.go:89] "kube-proxy-zwcwz" [f85562a3-8576-4dbf-a2b2-3f6a3d199df3] Running
	I0816 22:43:44.367211   18929 system_pods.go:89] "kube-scheduler-embed-certs-20210816223333-6986" [92b9b318-e6e4-4891-9609-5fe26593bcdb] Running
	I0816 22:43:44.367229   18929 system_pods.go:89] "metrics-server-7c784ccb57-qfrpw" [abb75357-7b33-4327-aa7f-8e9c15a192f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:44.367239   18929 system_pods.go:89] "storage-provisioner" [f3fc0038-f88e-416f-81e3-fb387b0e010a] Running
	I0816 22:43:44.367248   18929 system_pods.go:126] duration metric: took 202.65882ms to wait for k8s-apps to be running ...
	I0816 22:43:44.367259   18929 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:43:44.367307   18929 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:43:44.381654   18929 system_svc.go:56] duration metric: took 14.389765ms WaitForService to wait for kubelet.
	I0816 22:43:44.381678   18929 kubeadm.go:547] duration metric: took 8.346663342s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:43:44.381702   18929 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:43:44.563414   18929 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:43:44.563447   18929 node_conditions.go:123] node cpu capacity is 2
	I0816 22:43:44.563461   18929 node_conditions.go:105] duration metric: took 181.753579ms to run NodePressure ...
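The NodePressure step above reads the node's capacity (17784752Ki of ephemeral storage, 2 CPUs) and verifies its pressure conditions are clear. A rough jsonpath equivalent of the same two reads; the expressions are illustrative, not minikube's own:

    kubectl get node -o jsonpath='{.items[0].status.capacity}{"\n"}'
    kubectl get node -o jsonpath='{range .items[0].status.conditions[*]}{.type}={.status}{"\n"}{end}'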
	I0816 22:43:44.563473   18929 start.go:231] waiting for startup goroutines ...
	I0816 22:43:44.614237   18929 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 22:43:44.616600   18929 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210816223333-6986" cluster and "default" namespace by default
	I0816 22:43:40.264690   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:40.765483   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:41.264614   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:41.764581   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:42.265395   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:42.764674   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:43.265319   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:43.765315   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:44.265020   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:44.764726   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:45.265506   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:45.590495   19204 kubeadm.go:985] duration metric: took 12.988891092s to wait for elevateKubeSystemPrivileges.
	I0816 22:43:45.590529   19204 kubeadm.go:392] StartCluster complete in 5m37.979340771s
	I0816 22:43:45.590548   19204 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:43:45.590642   19204 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:43:45.593541   19204 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0816 22:43:45.657324   19204 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0816 22:43:46.665400   19204 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210816223418-6986" rescaled to 1
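The kapi.go:233 failure above is Kubernetes optimistic concurrency in action: another writer bumped the coredns Deployment's resourceVersion between minikube's read and write, the update came back with a conflict, and the retry one second later succeeded. Since scaling is idempotent, the same resilience is available from the shell:

    until kubectl -n kube-system scale deployment coredns --replicas=1; do
      sleep 1  # retry transient "the object has been modified" conflicts
    done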
	I0816 22:43:46.665482   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:43:46.665515   19204 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:43:46.667711   19204 out.go:177] * Verifying Kubernetes components...
	I0816 22:43:46.667773   19204 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:43:46.665580   19204 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:43:46.667837   19204 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210816223418-6986"
	I0816 22:43:46.667852   19204 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210816223418-6986"
	I0816 22:43:46.667860   19204 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210816223418-6986"
	I0816 22:43:46.667868   19204 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210816223418-6986"
	I0816 22:43:46.667838   19204 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210816223418-6986"
	I0816 22:43:46.667885   19204 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210816223418-6986"
	W0816 22:43:46.667894   19204 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:43:46.667927   19204 host.go:66] Checking if "default-k8s-different-port-20210816223418-6986" exists ...
	I0816 22:43:46.668329   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.668351   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.668368   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.668386   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.665780   19204 config.go:177] Loaded profile config "default-k8s-different-port-20210816223418-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:43:46.668451   19204 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210816223418-6986"
	W0816 22:43:46.667870   19204 addons.go:147] addon dashboard should already be in state true
	I0816 22:43:46.668473   19204 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210816223418-6986"
	W0816 22:43:46.668483   19204 addons.go:147] addon metrics-server should already be in state true
	I0816 22:43:46.668492   19204 host.go:66] Checking if "default-k8s-different-port-20210816223418-6986" exists ...
	I0816 22:43:46.668520   19204 host.go:66] Checking if "default-k8s-different-port-20210816223418-6986" exists ...
	I0816 22:43:46.668864   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.668905   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.668950   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.668990   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.689974   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42521
	I0816 22:43:46.690669   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.691280   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.691314   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.691679   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.692276   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.692315   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.692464   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43883
	I0816 22:43:46.693031   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.693526   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.693553   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.693968   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.694137   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetState
	I0816 22:43:46.705753   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38281
	I0816 22:43:46.706172   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.707061   19204 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210816223418-6986"
	W0816 22:43:46.707082   19204 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:43:46.707108   19204 host.go:66] Checking if "default-k8s-different-port-20210816223418-6986" exists ...
	I0816 22:43:46.707465   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.707503   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.707516   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.707545   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.707576   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44259
	I0816 22:43:46.707845   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.707927   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.708047   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetState
	I0816 22:43:46.708442   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.708498   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.708850   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40817
	I0816 22:43:46.708875   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.709295   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.709795   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.709831   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.709802   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.709896   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.710319   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.710841   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.710885   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.712390   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:43:46.714605   19204 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:43:46.714712   19204 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:43:46.714728   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:43:46.714749   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:43:46.721156   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.721380   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:43:46.721409   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.721629   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:43:46.721735   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:43:46.721864   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:43:46.721924   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
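
The sshutil.go lines above show minikube opening a key-based SSH connection to the VM so addon manifests can be streamed to /etc/kubernetes/addons. Below is a minimal Go sketch of that pattern using golang.org/x/crypto/ssh; the key path and manifest body are placeholders, and this illustrates the technique only, not minikube's actual implementation:

package main

import (
	"log"
	"os"
	"strings"

	"golang.org/x/crypto/ssh"
)

// Placeholder manifest body; in the log above this is the 2676-byte
// storage-provisioner.yaml held in memory.
const manifestYAML = "# ...addon manifest contents...\n"

func main() {
	// Placeholder key path; the real path appears verbatim in the log above.
	key, err := os.ReadFile("/path/to/.minikube/machines/<profile>/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for throwaway test VMs
	}
	client, err := ssh.Dial("tcp", "192.168.50.186:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	// "scp memory" in the log means the manifest bytes are streamed from
	// memory to the remote path rather than copied from a local file.
	sess.Stdin = strings.NewReader(manifestYAML)
	if err := sess.Run("sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null"); err != nil {
		log.Fatal(err)
	}
}
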
	I0816 22:43:46.729886   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41981
	I0816 22:43:46.730281   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.730709   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.730725   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.731239   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.731805   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.731916   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.731997   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0816 22:43:46.732368   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.732825   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.732847   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.733209   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.733449   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetState
	I0816 22:43:46.734015   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:32775
	I0816 22:43:46.734430   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.735096   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.735146   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.735539   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.735710   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetState
	I0816 22:43:46.737120   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:43:46.739055   19204 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:43:46.738848   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:43:46.739120   19204 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:43:46.739134   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:43:46.739158   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:43:46.740784   19204 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:43:46.742206   19204 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:43:46.742257   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:43:46.742270   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:43:46.742288   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:43:46.745626   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.746290   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:43:46.746384   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.746885   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:43:46.747264   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41917
	I0816 22:43:46.747662   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.748053   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.748065   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.748398   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.748516   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetState
	I0816 22:43:46.748635   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.749011   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:43:46.749029   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.749196   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:43:46.749309   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:43:46.749445   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:43:46.749576   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:43:46.751724   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:43:46.751878   19204 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:43:46.751885   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:43:46.751895   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:43:46.755264   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:43:46.755420   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:43:46.755543   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:43:46.757535   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.757844   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:43:46.757880   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.757947   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:43:46.758111   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:43:46.758232   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:43:46.758336   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:43:46.912338   19204 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:43:46.928084   19204 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210816223418-6986" to be "Ready" ...
	I0816 22:43:46.928162   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:43:46.932654   19204 node_ready.go:49] node "default-k8s-different-port-20210816223418-6986" has status "Ready":"True"
	I0816 22:43:46.932677   19204 node_ready.go:38] duration metric: took 4.560299ms waiting for node "default-k8s-different-port-20210816223418-6986" to be "Ready" ...
	I0816 22:43:46.932688   19204 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:46.938801   19204 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-jvhn9" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:46.959212   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:43:46.959239   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:43:46.980444   19204 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:43:46.992693   19204 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:43:46.992712   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:43:47.139897   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:43:47.140481   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:43:47.283513   19204 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:43:47.283548   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:43:47.307099   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:43:47.307124   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:43:47.337466   19204 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:43:47.337491   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:43:47.400423   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:43:47.400457   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:43:47.428735   19204 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:43:47.473437   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:43:47.473470   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:43:47.809043   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:43:47.809076   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:43:48.151719   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:43:48.151750   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:43:48.433383   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:43:48.433418   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:43:48.581909   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:43:48.581937   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:43:48.707807   19204 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:43:48.976179   19204 pod_ready.go:102] pod "coredns-558bd4d5db-jvhn9" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:49.433748   19204 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.505548681s)
	I0816 22:43:49.433787   19204 start.go:728] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS
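
The 2.5s pipeline that just completed rewrites the coredns ConfigMap in place: it fetches the Corefile, uses sed to insert a hosts block immediately before the forward directive, and feeds the result back through kubectl replace. Reconstructed from the sed expression in the command above, the injected Corefile fragment is:

        hosts {
           192.168.50.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

The fallthrough directive passes any query other than host.minikube.internal on to the forward plugin, so in-cluster DNS behavior is otherwise unchanged.
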
	I0816 22:43:49.434692   19204 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.522311056s)
	I0816 22:43:49.434732   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:49.434747   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:49.435098   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:49.435119   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:49.435131   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:49.435132   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Closing plugin on server side
	I0816 22:43:49.435143   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:49.435401   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:49.435415   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:49.725705   19204 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.745219751s)
	I0816 22:43:49.725764   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:49.725779   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:49.726085   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:49.726107   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:49.726124   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:49.726137   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:49.726388   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:49.726414   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:49.726427   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:49.726428   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Closing plugin on server side
	I0816 22:43:49.726440   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:49.726685   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Closing plugin on server side
	I0816 22:43:49.726730   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:49.726743   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:50.084265   19204 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.655473327s)
	I0816 22:43:50.084320   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:50.084336   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:50.084638   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:50.084661   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:50.084671   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:50.084682   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:50.084904   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:50.084916   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:50.084935   19204 addons.go:313] Verifying addon metrics-server=true in "default-k8s-different-port-20210816223418-6986"
	I0816 22:43:51.000374   19204 pod_ready.go:92] pod "coredns-558bd4d5db-jvhn9" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:51.000409   19204 pod_ready.go:81] duration metric: took 4.061576094s waiting for pod "coredns-558bd4d5db-jvhn9" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.000426   19204 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.042067   19204 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:51.042087   19204 pod_ready.go:81] duration metric: took 41.651304ms waiting for pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.042101   19204 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.076320   19204 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.368445421s)
	I0816 22:43:51.076371   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:51.076392   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:51.076636   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:51.076655   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:51.076666   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:51.076676   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:51.076961   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:51.076973   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Closing plugin on server side
	I0816 22:43:51.076983   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:51.078741   19204 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0816 22:43:51.078768   19204 addons.go:344] enableAddons completed in 4.413194371s
	I0816 22:43:51.095847   19204 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:51.095868   19204 pod_ready.go:81] duration metric: took 53.758678ms waiting for pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.095885   19204 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.120117   19204 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:51.120136   19204 pod_ready.go:81] duration metric: took 24.240957ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.120151   19204 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qhsq8" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.137975   19204 pod_ready.go:92] pod "kube-proxy-qhsq8" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:51.138000   19204 pod_ready.go:81] duration metric: took 17.840798ms waiting for pod "kube-proxy-qhsq8" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.138013   19204 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.361490   19204 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:51.361513   19204 pod_ready.go:81] duration metric: took 223.49089ms waiting for pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.361522   19204 pod_ready.go:38] duration metric: took 4.428821843s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
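
The pod_ready.go entries above poll each system-critical pod until its PodReady condition reports True, with a 6m budget per pod. A minimal client-go sketch of that kind of readiness wait follows; the shape is assumed for illustration and is not minikube's code, and the pod name is taken from the log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's PodReady condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-558bd4d5db-jvhn9", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("timed out waiting for pod")
}
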
	I0816 22:43:51.361535   19204 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:43:51.361593   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:43:51.390742   19204 api_server.go:70] duration metric: took 4.724914292s to wait for apiserver process to appear ...
	I0816 22:43:51.390767   19204 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:43:51.390777   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:43:51.398481   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 200:
	ok
	I0816 22:43:51.402341   19204 api_server.go:139] control plane version: v1.21.3
	I0816 22:43:51.402366   19204 api_server.go:129] duration metric: took 11.590514ms to wait for apiserver health ...
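
The healthz probe above is a plain HTTPS GET against the apiserver (here on the non-default port 8444) that expects a 200 response with body "ok". A minimal equivalent in Go; minikube verifies against the cluster CA, whereas this sketch skips certificate verification for brevity:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.50.186:8444/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}
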
	I0816 22:43:51.402376   19204 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:43:51.553058   19204 system_pods.go:59] 8 kube-system pods found
	I0816 22:43:51.553092   19204 system_pods.go:61] "coredns-558bd4d5db-jvhn9" [3c48c2dc-4beb-4359-aadc-1365db48feac] Running
	I0816 22:43:51.553102   19204 system_pods.go:61] "etcd-default-k8s-different-port-20210816223418-6986" [1ec44a23-d678-413f-bc79-1b3b24c77422] Running
	I0816 22:43:51.553109   19204 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210816223418-6986" [9246fbb2-2bd6-42a5-ad37-66c828343f50] Running
	I0816 22:43:51.553116   19204 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210816223418-6986" [974dfb9a-e4b0-4aee-862f-6b0b06f6491e] Running
	I0816 22:43:51.553122   19204 system_pods.go:61] "kube-proxy-qhsq8" [9abb9351-b721-48bb-94b9-887b5afc7584] Running
	I0816 22:43:51.553128   19204 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210816223418-6986" [b73b2730-6367-4d16-90c7-4ba6ec17f6ef] Running
	I0816 22:43:51.553142   19204 system_pods.go:61] "metrics-server-7c784ccb57-pbxnr" [fa2d27a5-b243-4a8f-9450-b834d1ce5bb0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:51.553155   19204 system_pods.go:61] "storage-provisioner" [a88a523b-5707-46b9-b7cf-6931db0d4487] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:43:51.553166   19204 system_pods.go:74] duration metric: took 150.783692ms to wait for pod list to return data ...
	I0816 22:43:51.553177   19204 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:43:51.749364   19204 default_sa.go:45] found service account: "default"
	I0816 22:43:51.749393   19204 default_sa.go:55] duration metric: took 196.209447ms for default service account to be created ...
	I0816 22:43:51.749405   19204 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:43:51.953876   19204 system_pods.go:86] 8 kube-system pods found
	I0816 22:43:51.953914   19204 system_pods.go:89] "coredns-558bd4d5db-jvhn9" [3c48c2dc-4beb-4359-aadc-1365db48feac] Running
	I0816 22:43:51.953923   19204 system_pods.go:89] "etcd-default-k8s-different-port-20210816223418-6986" [1ec44a23-d678-413f-bc79-1b3b24c77422] Running
	I0816 22:43:51.953931   19204 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20210816223418-6986" [9246fbb2-2bd6-42a5-ad37-66c828343f50] Running
	I0816 22:43:51.953938   19204 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20210816223418-6986" [974dfb9a-e4b0-4aee-862f-6b0b06f6491e] Running
	I0816 22:43:51.953949   19204 system_pods.go:89] "kube-proxy-qhsq8" [9abb9351-b721-48bb-94b9-887b5afc7584] Running
	I0816 22:43:51.953958   19204 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20210816223418-6986" [b73b2730-6367-4d16-90c7-4ba6ec17f6ef] Running
	I0816 22:43:51.953971   19204 system_pods.go:89] "metrics-server-7c784ccb57-pbxnr" [fa2d27a5-b243-4a8f-9450-b834d1ce5bb0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:51.953985   19204 system_pods.go:89] "storage-provisioner" [a88a523b-5707-46b9-b7cf-6931db0d4487] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:43:51.954000   19204 system_pods.go:126] duration metric: took 204.589729ms to wait for k8s-apps to be running ...
	I0816 22:43:51.954014   19204 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:43:51.954066   19204 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:43:51.982620   19204 system_svc.go:56] duration metric: took 28.600519ms WaitForService to wait for kubelet.
	I0816 22:43:51.982645   19204 kubeadm.go:547] duration metric: took 5.316821186s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:43:51.982666   19204 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:43:52.146042   19204 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:43:52.146082   19204 node_conditions.go:123] node cpu capacity is 2
	I0816 22:43:52.146096   19204 node_conditions.go:105] duration metric: took 163.423737ms to run NodePressure ...
	I0816 22:43:52.146108   19204 start.go:231] waiting for startup goroutines ...
	I0816 22:43:52.193059   19204 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 22:43:52.195545   19204 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20210816223418-6986" cluster and "default" namespace by default
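
The version line above notes a minor-version skew of 1 between kubectl (1.20.5) and the cluster (1.21.3). kubectl is supported within one minor version of the apiserver, which is why this is logged informationally rather than as a warning; the check reduces to comparing minor components, as in this toy sketch (error handling elided):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component from a "major.minor.patch" version string.
func minor(v string) int {
	m, _ := strconv.Atoi(strings.Split(v, ".")[1])
	return m
}

func main() {
	kubectl, cluster := "1.20.5", "1.21.3" // values from the log line above
	skew := minor(cluster) - minor(kubectl)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("minor skew: %d (supported if <= 1)\n", skew)
}
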
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
	1c2b60d03fede       523cad1a4df73       22 seconds ago      Exited              dashboard-metrics-scraper   1                   cc84c176c65f2
	654de8c4789a8       9a07b5b4bfac0       30 seconds ago      Running             kubernetes-dashboard        0                   bba06897412e6
	14afa3ade101a       6e38f40d628db       31 seconds ago      Running             storage-provisioner         0                   762745540814d
	acfa6acd62251       296a6d5035e2d       34 seconds ago      Running             coredns                     0                   73de0351bc3ba
	d2b4592df27b1       adb2816ea823a       36 seconds ago      Running             kube-proxy                  0                   b72889a5e9646
	5252b10be7bb7       6be0dc1302e30       58 seconds ago      Running             kube-scheduler              0                   3b689050c0847
	6d8fa8177082e       3d174f00aa39e       58 seconds ago      Running             kube-apiserver              0                   1cd3b203f0f52
	00de61027a57c       bc2bb319a7038       58 seconds ago      Running             kube-controller-manager     0                   3a8fef0311637
	3276b4dbcd0d1       0369cf4303ffd       58 seconds ago      Running             etcd                        0                   e78080cd3b118
	bacd946d23b74       56cc512116c8f       5 minutes ago       Exited              busybox                     0                   19768d5cd129d
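
The table above is the CRI runtime's view of the node; with containerd, CRI-managed containers live in the "k8s.io" namespace. A short sketch using the containerd Go client to enumerate them (the socket path is containerd's default; this is an illustration, not how the report itself gathered the table):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed containers are created under the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID())
	}
}
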
	
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2021-08-16 22:37:20 UTC, end at Mon 2021-08-16 22:44:12 UTC. --
	Aug 16 22:43:48 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:48.682202039Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/echoserver:1.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 16 22:43:48 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:48.683231100Z" level=info msg="PullImage \"k8s.gcr.io/echoserver:1.4\" returns image reference \"sha256:523cad1a4df732d41406c9de49f932cd60d56ffd50619158a2977fd1066028f9\""
	Aug 16 22:43:48 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:48.696387695Z" level=info msg="CreateContainer within sandbox \"cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,}"
	Aug 16 22:43:48 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:48.792913771Z" level=info msg="CreateContainer within sandbox \"cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,} returns container id \"2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce\""
	Aug 16 22:43:48 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:48.794324565Z" level=info msg="StartContainer for \"2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce\""
	Aug 16 22:43:49 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:49.307528092Z" level=info msg="StartContainer for \"2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce\" returns successfully"
	Aug 16 22:43:49 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:49.350883983Z" level=info msg="Finish piping stdout of container \"2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce\""
	Aug 16 22:43:49 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:49.351663852Z" level=info msg="Finish piping stderr of container \"2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce\""
	Aug 16 22:43:49 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:49.352353081Z" level=info msg="TaskExit event &TaskExit{ContainerID:2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce,ID:2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce,Pid:6958,ExitStatus:1,ExitedAt:2021-08-16 22:43:49.348607847 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:43:49 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:49.477777914Z" level=info msg="shim disconnected" id=2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce
	Aug 16 22:43:49 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:49.478370076Z" level=error msg="copy shim log" error="read /proc/self/fd/131: file already closed"
	Aug 16 22:43:50 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:50.388816633Z" level=info msg="CreateContainer within sandbox \"cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 16 22:43:50 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:50.462922609Z" level=info msg="CreateContainer within sandbox \"cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18\""
	Aug 16 22:43:50 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:50.466418463Z" level=info msg="StartContainer for \"1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18\""
	Aug 16 22:43:50 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:50.983588388Z" level=info msg="StartContainer for \"1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18\" returns successfully"
	Aug 16 22:43:51 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:51.011500871Z" level=info msg="Finish piping stderr of container \"1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18\""
	Aug 16 22:43:51 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:51.012123363Z" level=info msg="Finish piping stdout of container \"1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18\""
	Aug 16 22:43:51 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:51.013672716Z" level=info msg="TaskExit event &TaskExit{ContainerID:1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18,ID:1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18,Pid:7049,ExitStatus:1,ExitedAt:2021-08-16 22:43:51.013222589 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:43:51 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:51.091247728Z" level=info msg="shim disconnected" id=1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18
	Aug 16 22:43:51 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:51.091430689Z" level=error msg="copy shim log" error="read /proc/self/fd/131: file already closed"
	Aug 16 22:43:51 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:51.399506364Z" level=info msg="RemoveContainer for \"2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce\""
	Aug 16 22:43:51 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:51.423843307Z" level=info msg="RemoveContainer for \"2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce\" returns successfully"
	Aug 16 22:43:52 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:52.994762402Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:43:53 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:53.004005641Z" level=info msg="trying next host" error="failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" host=fake.domain
	Aug 16 22:43:53 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:53.010625463Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
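
The pull failure above appears deliberate rather than an infrastructure fault: the metrics-server addon was configured with the unresolvable registry fake.domain (see "Using image fake.domain/k8s.gcr.io/echoserver:1.4" earlier in the log), so the resolver returns NXDOMAIN. The same failure class is reproducible directly:

package main

import (
	"errors"
	"fmt"
	"net"
)

func main() {
	// The image pull bottoms out in a DNS lookup of "fake.domain".
	_, err := net.LookupHost("fake.domain")
	var dnsErr *net.DNSError
	if errors.As(err, &dnsErr) && dnsErr.IsNotFound {
		fmt.Println("no such host:", dnsErr.Name)
	}
}
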
	
	* 
	* ==> coredns [acfa6acd622519fbbdfbc3cc49cca060cd8efae7eb741a5d9b0b773af45869f9] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.036925] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.901213] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1728 comm=systemd-network
	[  +0.883717] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.353046] vboxguest: loading out-of-tree module taints kernel.
	[  +0.010605] vboxguest: PCI device not found, probably running on physical hardware.
	[ +22.507740] systemd-fstab-generator[2072]: Ignoring "noauto" for root device
	[  +0.271233] systemd-fstab-generator[2103]: Ignoring "noauto" for root device
	[  +0.152317] systemd-fstab-generator[2118]: Ignoring "noauto" for root device
	[  +0.207312] systemd-fstab-generator[2149]: Ignoring "noauto" for root device
	[  +7.476654] systemd-fstab-generator[2341]: Ignoring "noauto" for root device
	[Aug16 22:38] kauditd_printk_skb: 38 callbacks suppressed
	[ +12.917331] kauditd_printk_skb: 77 callbacks suppressed
	[ +10.598492] kauditd_printk_skb: 29 callbacks suppressed
	[ +16.363247] kauditd_printk_skb: 95 callbacks suppressed
	[Aug16 22:39] NFSD: Unable to end grace period: -110
	[Aug16 22:43] kauditd_printk_skb: 17 callbacks suppressed
	[  +4.107316] systemd-fstab-generator[5453]: Ignoring "noauto" for root device
	[ +15.896257] systemd-fstab-generator[5858]: Ignoring "noauto" for root device
	[ +14.079160] kauditd_printk_skb: 53 callbacks suppressed
	[  +6.170983] kauditd_printk_skb: 68 callbacks suppressed
	[  +6.493037] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.988277] systemd-fstab-generator[7116]: Ignoring "noauto" for root device
	[  +0.783030] systemd-fstab-generator[7174]: Ignoring "noauto" for root device
	[  +0.981771] systemd-fstab-generator[7227]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522] <==
	* raft2021/08/16 22:43:14 INFO: 8a5e282779c32a12 switched to configuration voters=(9970450775056525842)
	2021-08-16 22:43:14.356157 W | auth: simple token is not cryptographically signed
	2021-08-16 22:43:14.368942 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-16 22:43:14.373040 I | etcdserver: 8a5e282779c32a12 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2021-08-16 22:43:14.374317 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-16 22:43:14.374531 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-16 22:43:14.374600 I | embed: listening for peers on 192.168.105.129:2380
	raft2021/08/16 22:43:14 INFO: 8a5e282779c32a12 switched to configuration voters=(9970450775056525842)
	2021-08-16 22:43:14.375295 I | etcdserver/membership: added member 8a5e282779c32a12 [https://192.168.105.129:2380] to cluster bbef9637db6db2f6
	raft2021/08/16 22:43:15 INFO: 8a5e282779c32a12 is starting a new election at term 1
	raft2021/08/16 22:43:15 INFO: 8a5e282779c32a12 became candidate at term 2
	raft2021/08/16 22:43:15 INFO: 8a5e282779c32a12 received MsgVoteResp from 8a5e282779c32a12 at term 2
	raft2021/08/16 22:43:15 INFO: 8a5e282779c32a12 became leader at term 2
	raft2021/08/16 22:43:15 INFO: raft.node: 8a5e282779c32a12 elected leader 8a5e282779c32a12 at term 2
	2021-08-16 22:43:15.353310 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-16 22:43:15.356492 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-16 22:43:15.356837 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-16 22:43:15.357115 I | etcdserver: published {Name:embed-certs-20210816223333-6986 ClientURLs:[https://192.168.105.129:2379]} to cluster bbef9637db6db2f6
	2021-08-16 22:43:15.360255 I | embed: ready to serve client requests
	2021-08-16 22:43:15.361142 I | embed: ready to serve client requests
	2021-08-16 22:43:15.362539 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-16 22:43:15.399026 I | embed: serving client requests on 192.168.105.129:2379
	2021-08-16 22:43:32.548674 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:43:40.849425 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:43:50.848648 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  22:44:24 up 7 min,  0 users,  load average: 1.23, 0.68, 0.31
	Linux embed-certs-20210816223333-6986 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [6d8fa8177082e340822efb9f6f8c7938b9c592b339aeb928ced35b94554c0f5c] <==
	* I0816 22:43:45.900277       1 client.go:360] parsed scheme: "passthrough"
	I0816 22:43:45.900332       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 22:43:45.900352       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0816 22:44:15.571285       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:16.032342       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:18.757235       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:18.759422       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:19.523483       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:19.523484       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:19.603018       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:19.653511       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:19.664928       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:19.770384       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:19.770908       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:19.795271       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:19.901217       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:20.133403       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:20.327509       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:22.949420       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:24.167654       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:24.565873       1 trace.go:205] Trace[995948218]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:44:12.961) (total time: 11604ms):
	Trace[995948218]: [11.604424135s] [11.604424135s] END
	E0816 22:44:24.566406       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(nil)}, sizeCache:0, unknownFields:[]uint8(nil), Code:14, Message:"transport is closing", Details:[]*anypb.Any(nil)}: rpc error: code = Unavailable desc = transport is closing
	I0816 22:44:24.567170       1 trace.go:205] Trace[241104453]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (16-Aug-2021 22:44:12.961) (total time: 11605ms):
	Trace[241104453]: [11.605819464s] [11.605819464s] END
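
The trace above shows a nodes List stalling for 11.6s while the etcd client transport closes underneath it, which is likely why the "describe nodes" section earlier came back empty. When driving the API programmatically, bounding such calls with a context keeps a stalled backend from hanging the caller; a minimal client-go sketch:

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Bound the request so a stalled etcd surfaces as a client-side timeout
	// instead of an 11s hang like the trace above.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		fmt.Println("list failed:", err)
		return
	}
	fmt.Println("nodes:", len(nodes.Items))
}
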
	
	* 
	* ==> kube-controller-manager [00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8] <==
	* I0816 22:43:39.844589       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0816 22:43:39.912990       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:43:39.931313       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0816 22:43:39.942376       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:43:39.980587       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:39.980927       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:43:39.980954       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:43:40.080124       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:43:40.185551       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:40.185988       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:43:40.193497       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:40.194751       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:43:40.266384       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:40.267647       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:43:40.272168       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	E0816 22:43:40.330751       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:40.331650       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:43:40.343449       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:40.347519       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:43:40.418141       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:40.418772       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:43:40.438355       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:40.441402       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:43:40.443642       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-6fqh5"
	I0816 22:43:40.501015       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-b5fzd"
	
	* 
	* ==> kube-proxy [d2b4592df27b12f825742f24ddd825a80ddb1dc22ba0ff1044a6d255e91c1480] <==
	* I0816 22:43:37.304647       1 node.go:172] Successfully retrieved node IP: 192.168.105.129
	I0816 22:43:37.305006       1 server_others.go:140] Detected node IP 192.168.105.129
	W0816 22:43:37.305154       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0816 22:43:37.440930       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0816 22:43:37.441035       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0816 22:43:37.496280       1 server_others.go:212] Using iptables Proxier.
	I0816 22:43:37.496904       1 server.go:643] Version: v1.21.3
	I0816 22:43:37.512871       1 config.go:315] Starting service config controller
	I0816 22:43:37.512906       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:43:37.512934       1 config.go:224] Starting endpoint slice config controller
	I0816 22:43:37.512938       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:43:37.528742       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 22:43:37.539491       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:43:37.681345       1 shared_informer.go:247] Caches are synced for service config 
	I0816 22:43:37.688494       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [5252b10be7bb737ec2020248c745eda7772c84f779b0b3c55c946d8a978aff12] <==
	* I0816 22:43:19.751185       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:43:19.753179       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:43:19.751209       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0816 22:43:19.762842       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:43:19.763262       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:43:19.763462       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:43:19.763799       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:43:19.764150       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:43:19.777531       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:43:19.777643       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:43:19.777700       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:43:19.777756       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:43:19.777835       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:43:19.777903       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:43:19.777970       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:43:19.780571       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:43:19.780655       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:43:20.706510       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:43:20.738573       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:43:20.808015       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:43:20.868559       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:43:20.927965       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:43:20.960608       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:43:21.029567       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0816 22:43:23.553368       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:37:20 UTC, end at Mon 2021-08-16 22:44:24 UTC. --
	Aug 16 22:43:40 embed-certs-20210816223333-6986 kubelet[5875]: I0816 22:43:40.567962    5875 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:43:40 embed-certs-20210816223333-6986 kubelet[5875]: I0816 22:43:40.574909    5875 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:43:40 embed-certs-20210816223333-6986 kubelet[5875]: I0816 22:43:40.646273    5875 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtwdt\" (UniqueName: \"kubernetes.io/projected/1375c060-0e73-47e5-b599-6d7e58617b31-kube-api-access-jtwdt\") pod \"kubernetes-dashboard-6fcdf4f6d-6fqh5\" (UID: \"1375c060-0e73-47e5-b599-6d7e58617b31\") "
	Aug 16 22:43:40 embed-certs-20210816223333-6986 kubelet[5875]: I0816 22:43:40.646645    5875 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjmc6\" (UniqueName: \"kubernetes.io/projected/2e56d6c3-4bfc-46cc-b263-0b26d0941122-kube-api-access-gjmc6\") pod \"dashboard-metrics-scraper-8685c45546-b5fzd\" (UID: \"2e56d6c3-4bfc-46cc-b263-0b26d0941122\") "
	Aug 16 22:43:40 embed-certs-20210816223333-6986 kubelet[5875]: I0816 22:43:40.646798    5875 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1375c060-0e73-47e5-b599-6d7e58617b31-tmp-volume\") pod \"kubernetes-dashboard-6fcdf4f6d-6fqh5\" (UID: \"1375c060-0e73-47e5-b599-6d7e58617b31\") "
	Aug 16 22:43:40 embed-certs-20210816223333-6986 kubelet[5875]: I0816 22:43:40.647033    5875 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2e56d6c3-4bfc-46cc-b263-0b26d0941122-tmp-volume\") pod \"dashboard-metrics-scraper-8685c45546-b5fzd\" (UID: \"2e56d6c3-4bfc-46cc-b263-0b26d0941122\") "
	Aug 16 22:43:40 embed-certs-20210816223333-6986 kubelet[5875]: E0816 22:43:40.884947    5875 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:43:40 embed-certs-20210816223333-6986 kubelet[5875]: E0816 22:43:40.885346    5875 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:43:40 embed-certs-20210816223333-6986 kubelet[5875]: E0816 22:43:40.885794    5875 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zkc86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-qfrpw_kube-system(abb75357-7b33-4327-aa7f-8e9c15a192f8): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:43:40 embed-certs-20210816223333-6986 kubelet[5875]: E0816 22:43:40.886039    5875 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-qfrpw" podUID=abb75357-7b33-4327-aa7f-8e9c15a192f8
	Aug 16 22:43:41 embed-certs-20210816223333-6986 kubelet[5875]: E0816 22:43:41.159929    5875 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-qfrpw" podUID=abb75357-7b33-4327-aa7f-8e9c15a192f8
	Aug 16 22:43:50 embed-certs-20210816223333-6986 kubelet[5875]: I0816 22:43:50.379528    5875 scope.go:111] "RemoveContainer" containerID="2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce"
	Aug 16 22:43:51 embed-certs-20210816223333-6986 kubelet[5875]: I0816 22:43:51.386605    5875 scope.go:111] "RemoveContainer" containerID="2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce"
	Aug 16 22:43:51 embed-certs-20210816223333-6986 kubelet[5875]: I0816 22:43:51.391433    5875 scope.go:111] "RemoveContainer" containerID="1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18"
	Aug 16 22:43:51 embed-certs-20210816223333-6986 kubelet[5875]: E0816 22:43:51.392020    5875 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-b5fzd_kubernetes-dashboard(2e56d6c3-4bfc-46cc-b263-0b26d0941122)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-b5fzd" podUID=2e56d6c3-4bfc-46cc-b263-0b26d0941122
	Aug 16 22:43:52 embed-certs-20210816223333-6986 kubelet[5875]: I0816 22:43:52.392946    5875 scope.go:111] "RemoveContainer" containerID="1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18"
	Aug 16 22:43:52 embed-certs-20210816223333-6986 kubelet[5875]: E0816 22:43:52.393670    5875 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-b5fzd_kubernetes-dashboard(2e56d6c3-4bfc-46cc-b263-0b26d0941122)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-b5fzd" podUID=2e56d6c3-4bfc-46cc-b263-0b26d0941122
	Aug 16 22:43:53 embed-certs-20210816223333-6986 kubelet[5875]: E0816 22:43:53.012916    5875 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:43:53 embed-certs-20210816223333-6986 kubelet[5875]: E0816 22:43:53.013980    5875 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:43:53 embed-certs-20210816223333-6986 kubelet[5875]: E0816 22:43:53.016764    5875 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zkc86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-qfrpw_kube-system(abb75357-7b33-4327-aa7f-8e9c15a192f8): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:43:53 embed-certs-20210816223333-6986 kubelet[5875]: E0816 22:43:53.020582    5875 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-qfrpw" podUID=abb75357-7b33-4327-aa7f-8e9c15a192f8
	Aug 16 22:43:55 embed-certs-20210816223333-6986 kubelet[5875]: I0816 22:43:55.694590    5875 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 16 22:43:55 embed-certs-20210816223333-6986 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:43:55 embed-certs-20210816223333-6986 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:43:55 embed-certs-20210816223333-6986 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [654de8c4789a875b3548d819c9b56cb6cb50e2799fcbb2b1e67efa6ea8278322] <==
	* 2021/08/16 22:43:42 Using namespace: kubernetes-dashboard
	2021/08/16 22:43:42 Using in-cluster config to connect to apiserver
	2021/08/16 22:43:42 Using secret token for csrf signing
	2021/08/16 22:43:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/16 22:43:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/16 22:43:42 Successful initial request to the apiserver, version: v1.21.3
	2021/08/16 22:43:42 Generating JWE encryption key
	2021/08/16 22:43:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/16 22:43:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/16 22:43:42 Initializing JWE encryption key from synchronized object
	2021/08/16 22:43:42 Creating in-cluster Sidecar client
	2021/08/16 22:43:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:43:42 Serving insecurely on HTTP port: 9090
	2021/08/16 22:43:42 Starting overwatch
	
	* 
	* ==> storage-provisioner [14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356] <==
	* I0816 22:43:41.676514       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 22:43:41.721917       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 22:43:41.726273       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 22:43:41.763614       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 22:43:41.767317       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"22b6cc21-5c58-4ad8-a870-b8e4ae262cb8", APIVersion:"v1", ResourceVersion:"596", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20210816223333-6986_bec95594-1072-419b-8255-c4a5dff4a2a2 became leader
	I0816 22:43:41.767655       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20210816223333-6986_bec95594-1072-419b-8255-c4a5dff4a2a2!
	I0816 22:43:41.877686       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20210816223333-6986_bec95594-1072-419b-8255-c4a5dff4a2a2!
	
	

-- /stdout --
** stderr ** 
	E0816 22:44:24.573513   20474 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server: rpc error: code = Unavailable desc = transport is closing
	 output: "\n** stderr ** \nError from server: rpc error: code = Unavailable desc = transport is closing\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210816223333-6986 -n embed-certs-20210816223333-6986

=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210816223333-6986 -n embed-certs-20210816223333-6986: exit status 2 (15.727842141s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	E0816 22:44:40.898578   20524 status.go:422] Error apiserver status: https://192.168.105.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

** /stderr **
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-20210816223333-6986 logs -n 25
E0816 22:44:43.975259    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816222225-6986/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p embed-certs-20210816223333-6986 logs -n 25: exit status 110 (1m1.2269664s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable metrics-server -p                          | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:44 UTC | Mon, 16 Aug 2021 22:34:45 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816223156-6986                 | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:31:56 UTC | Mon, 16 Aug 2021 22:35:04 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:33:33 UTC | Mon, 16 Aug 2021 22:35:08 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:16 UTC | Mon, 16 Aug 2021 22:35:17 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:19 UTC | Mon, 16 Aug 2021 22:35:20 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:18 UTC | Mon, 16 Aug 2021 22:35:42 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:51 UTC | Mon, 16 Aug 2021 22:35:52 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:45 UTC | Mon, 16 Aug 2021 22:36:18 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:18 UTC | Mon, 16 Aug 2021 22:36:18 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:17 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:20 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:52 UTC | Mon, 16 Aug 2021 22:37:25 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:37:25 UTC | Mon, 16 Aug 2021 22:37:25 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816223156-6986                 | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:42:31 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:42:42 UTC | Mon, 16 Aug 2021 22:42:42 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:18 UTC | Mon, 16 Aug 2021 22:43:37 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:43:44 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:43:47 UTC | Mon, 16 Aug 2021 22:43:47 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:37:25 UTC | Mon, 16 Aug 2021 22:43:52 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:43:55 UTC | Mon, 16 Aug 2021 22:43:55 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:44:03 UTC | Mon, 16 Aug 2021 22:44:03 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:44:30 UTC | Mon, 16 Aug 2021 22:44:30 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:44:31 UTC | Mon, 16 Aug 2021 22:44:31 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:44:31
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:44:31.336463   20709 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:44:31.336533   20709 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:44:31.336537   20709 out.go:311] Setting ErrFile to fd 2...
	I0816 22:44:31.336542   20709 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:44:31.336660   20709 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:44:31.336912   20709 out.go:305] Setting JSON to false
	I0816 22:44:31.372871   20709 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":5233,"bootTime":1629148638,"procs":183,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:44:31.372979   20709 start.go:121] virtualization: kvm guest
	I0816 22:44:31.375339   20709 out.go:177] * [newest-cni-20210816224431-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:44:31.376976   20709 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:44:31.375475   20709 notify.go:169] Checking for updates...
	I0816 22:44:31.378360   20709 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:44:31.379751   20709 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:44:31.381087   20709 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:44:31.381541   20709 config.go:177] Loaded profile config "default-k8s-different-port-20210816223418-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:44:31.381666   20709 config.go:177] Loaded profile config "embed-certs-20210816223333-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:44:31.381762   20709 config.go:177] Loaded profile config "old-k8s-version-20210816223154-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0816 22:44:31.381800   20709 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:44:31.415103   20709 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 22:44:31.415129   20709 start.go:278] selected driver: kvm2
	I0816 22:44:31.415136   20709 start.go:751] validating driver "kvm2" against <nil>
	I0816 22:44:31.415156   20709 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 22:44:31.417123   20709 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:44:31.417269   20709 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 22:44:31.428378   20709 install.go:137] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2 version is 1.22.0
	I0816 22:44:31.428425   20709 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	W0816 22:44:31.428448   20709 out.go:242] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0816 22:44:31.428583   20709 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 22:44:31.428608   20709 cni.go:93] Creating CNI manager for ""
	I0816 22:44:31.428616   20709 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:44:31.428624   20709 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 22:44:31.428632   20709 start_flags.go:277] config:
	{Name:newest-cni-20210816224431-6986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816224431-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:44:31.428727   20709 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:44:31.430575   20709 out.go:177] * Starting control plane node newest-cni-20210816224431-6986 in cluster newest-cni-20210816224431-6986
	I0816 22:44:31.430597   20709 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 22:44:31.430639   20709 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0816 22:44:31.430657   20709 cache.go:56] Caching tarball of preloaded images
	I0816 22:44:31.430757   20709 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 22:44:31.430778   20709 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on containerd
	I0816 22:44:31.430895   20709 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/config.json ...
	I0816 22:44:31.430918   20709 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/config.json: {Name:mkc9663018589074668a46d91251fc73622d0917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:44:31.431076   20709 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:44:31.431108   20709 start.go:313] acquiring machines lock for newest-cni-20210816224431-6986: {Name:mk808edd60d1305a42bb85791729eff4573dbb15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 22:44:31.431156   20709 start.go:317] acquired machines lock for "newest-cni-20210816224431-6986" in 32.129µs
	I0816 22:44:31.431179   20709 start.go:89] Provisioning new machine with config: &{Name:newest-cni-20210816224431-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816224431-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0816 22:44:31.431268   20709 start.go:126] createHost starting for "" (driver="kvm2")
	I0816 22:44:31.433330   20709 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 22:44:31.433460   20709 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:44:31.433512   20709 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:44:31.443654   20709 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46495
	I0816 22:44:31.444063   20709 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:44:31.444565   20709 main.go:130] libmachine: Using API Version  1
	I0816 22:44:31.444586   20709 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:44:31.444925   20709 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:44:31.445103   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetMachineName
	I0816 22:44:31.445239   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:44:31.445356   20709 start.go:160] libmachine.API.Create for "newest-cni-20210816224431-6986" (driver="kvm2")
	I0816 22:44:31.445394   20709 client.go:168] LocalClient.Create starting
	I0816 22:44:31.445428   20709 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem
	I0816 22:44:31.445462   20709 main.go:130] libmachine: Decoding PEM data...
	I0816 22:44:31.445480   20709 main.go:130] libmachine: Parsing certificate...
	I0816 22:44:31.445628   20709 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem
	I0816 22:44:31.445657   20709 main.go:130] libmachine: Decoding PEM data...
	I0816 22:44:31.445682   20709 main.go:130] libmachine: Parsing certificate...
	I0816 22:44:31.445747   20709 main.go:130] libmachine: Running pre-create checks...
	I0816 22:44:31.445765   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .PreCreateCheck
	I0816 22:44:31.446091   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetConfigRaw
	I0816 22:44:31.446492   20709 main.go:130] libmachine: Creating machine...
	I0816 22:44:31.446507   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Create
	I0816 22:44:31.446664   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Creating KVM machine...
	I0816 22:44:31.449397   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found existing default KVM network
	I0816 22:44:31.450933   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.450787   20733 network.go:240] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b4:06:d9}}
	I0816 22:44:31.452400   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.452341   20733 network.go:240] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f8:b7:da}}
	I0816 22:44:31.453464   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.453369   20733 network.go:240] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ee:53:1e}}
	I0816 22:44:31.454534   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.454466   20733 network.go:240] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:45:d3:67}}
	I0816 22:44:31.456524   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.456452   20733 network.go:240] skipping subnet 192.168.83.0/24 that is taken: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 Interface:{IfaceName:virbr5 IfaceIPv4:192.168.83.1 IfaceMTU:1500 IfaceMAC:52:54:00:ea:76:4e}}
	I0816 22:44:31.457759   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.457675   20733 network.go:240] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 Interface:{IfaceName:virbr6 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:52:54:00:6c:86:bd}}
	I0816 22:44:31.458795   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.458728   20733 network.go:240] skipping subnet 192.168.105.0/24 that is taken: &{IP:192.168.105.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.105.0/24 Gateway:192.168.105.1 ClientMin:192.168.105.2 ClientMax:192.168.105.254 Broadcast:192.168.105.255 Interface:{IfaceName:virbr7 IfaceIPv4:192.168.105.1 IfaceMTU:1500 IfaceMAC:52:54:00:ea:b2:03}}
	I0816 22:44:31.460187   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.460086   20733 network.go:288] reserving subnet 192.168.116.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.116.0:0xc0000be0b8] misses:0}
	I0816 22:44:31.460215   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.460127   20733 network.go:235] using free private subnet 192.168.116.0/24: &{IP:192.168.116.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.116.0/24 Gateway:192.168.116.1 ClientMin:192.168.116.2 ClientMax:192.168.116.254 Broadcast:192.168.116.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0816 22:44:31.497376   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | trying to create private KVM network mk-newest-cni-20210816224431-6986 192.168.116.0/24...
	I0816 22:44:31.783525   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | private KVM network mk-newest-cni-20210816224431-6986 192.168.116.0/24 created
	I0816 22:44:31.783566   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986 ...
	I0816 22:44:31.783588   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.783470   20733 common.go:101] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:44:31.783620   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0816 22:44:31.783744   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso...
	I0816 22:44:31.986209   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.986090   20733 common.go:108] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa...
	I0816 22:44:32.210064   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:32.209964   20733 common.go:114] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/newest-cni-20210816224431-6986.rawdisk...
	I0816 22:44:32.210106   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Writing magic tar header
	I0816 22:44:32.210184   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Writing SSH key tar header
	I0816 22:44:32.210290   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:32.210235   20733 common.go:128] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986 ...
	I0816 22:44:32.210373   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986
	I0816 22:44:32.210394   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines
	I0816 22:44:32.210410   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986 (perms=drwx------)
	I0816 22:44:32.210437   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:44:32.210461   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a
	I0816 22:44:32.210482   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines (perms=drwxr-xr-x)
	I0816 22:44:32.210497   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 22:44:32.210520   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube (perms=drwxr-xr-x)
	I0816 22:44:32.210544   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a (perms=drwxr-xr-x)
	I0816 22:44:32.210559   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0816 22:44:32.210573   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Checking permissions on dir: /home/jenkins
	I0816 22:44:32.210588   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Checking permissions on dir: /home
	I0816 22:44:32.210601   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Skipping /home - not owner
	I0816 22:44:32.210622   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 22:44:32.210643   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Creating domain...
	I0816 22:44:32.236885   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:cb:20:33 in network default
	I0816 22:44:32.237605   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Ensuring networks are active...
	I0816 22:44:32.237633   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:32.239810   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Ensuring network default is active
	I0816 22:44:32.240283   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Ensuring network mk-newest-cni-20210816224431-6986 is active
	I0816 22:44:32.240922   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Getting domain xml...
	I0816 22:44:32.242965   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Creating domain...
	I0816 22:44:32.738898   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Waiting to get IP...
	I0816 22:44:32.739904   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:32.740448   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:32.740502   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:32.740433   20733 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0816 22:44:33.004929   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:33.005411   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:33.005575   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:33.005505   20733 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0816 22:44:33.387930   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:33.388329   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:33.388355   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:33.388297   20733 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0816 22:44:33.812967   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:33.813440   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:33.813476   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:33.813379   20733 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0816 22:44:34.287851   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:34.288312   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:34.288339   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:34.288266   20733 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0816 22:44:34.876901   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:34.877366   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:34.877411   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:34.877323   20733 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0816 22:44:35.713609   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:35.714113   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:35.714144   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:35.714065   20733 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	1c2b60d03fede       523cad1a4df73       51 seconds ago       Exited              dashboard-metrics-scraper   1                   cc84c176c65f2
	654de8c4789a8       9a07b5b4bfac0       59 seconds ago       Running             kubernetes-dashboard        0                   bba06897412e6
	14afa3ade101a       6e38f40d628db       About a minute ago   Running             storage-provisioner         0                   762745540814d
	acfa6acd62251       296a6d5035e2d       About a minute ago   Running             coredns                     0                   73de0351bc3ba
	d2b4592df27b1       adb2816ea823a       About a minute ago   Running             kube-proxy                  0                   b72889a5e9646
	5252b10be7bb7       6be0dc1302e30       About a minute ago   Running             kube-scheduler              0                   3b689050c0847
	6d8fa8177082e       3d174f00aa39e       About a minute ago   Running             kube-apiserver              0                   1cd3b203f0f52
	00de61027a57c       bc2bb319a7038       About a minute ago   Running             kube-controller-manager     0                   3a8fef0311637
	3276b4dbcd0d1       0369cf4303ffd       About a minute ago   Running             etcd                        0                   e78080cd3b118
	bacd946d23b74       56cc512116c8f       5 minutes ago        Exited              busybox                     0                   19768d5cd129d
	
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2021-08-16 22:37:20 UTC, end at Mon 2021-08-16 22:44:41 UTC. --
	Aug 16 22:43:48 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:48.682202039Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/echoserver:1.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 16 22:43:48 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:48.683231100Z" level=info msg="PullImage \"k8s.gcr.io/echoserver:1.4\" returns image reference \"sha256:523cad1a4df732d41406c9de49f932cd60d56ffd50619158a2977fd1066028f9\""
	Aug 16 22:43:48 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:48.696387695Z" level=info msg="CreateContainer within sandbox \"cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,}"
	Aug 16 22:43:48 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:48.792913771Z" level=info msg="CreateContainer within sandbox \"cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:0,} returns container id \"2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce\""
	Aug 16 22:43:48 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:48.794324565Z" level=info msg="StartContainer for \"2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce\""
	Aug 16 22:43:49 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:49.307528092Z" level=info msg="StartContainer for \"2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce\" returns successfully"
	Aug 16 22:43:49 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:49.350883983Z" level=info msg="Finish piping stdout of container \"2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce\""
	Aug 16 22:43:49 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:49.351663852Z" level=info msg="Finish piping stderr of container \"2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce\""
	Aug 16 22:43:49 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:49.352353081Z" level=info msg="TaskExit event &TaskExit{ContainerID:2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce,ID:2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce,Pid:6958,ExitStatus:1,ExitedAt:2021-08-16 22:43:49.348607847 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:43:49 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:49.477777914Z" level=info msg="shim disconnected" id=2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce
	Aug 16 22:43:49 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:49.478370076Z" level=error msg="copy shim log" error="read /proc/self/fd/131: file already closed"
	Aug 16 22:43:50 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:50.388816633Z" level=info msg="CreateContainer within sandbox \"cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 16 22:43:50 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:50.462922609Z" level=info msg="CreateContainer within sandbox \"cc84c176c65f284accce54c66c72942ac0743905d770f9378728cfa2d6dfa72b\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18\""
	Aug 16 22:43:50 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:50.466418463Z" level=info msg="StartContainer for \"1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18\""
	Aug 16 22:43:50 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:50.983588388Z" level=info msg="StartContainer for \"1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18\" returns successfully"
	Aug 16 22:43:51 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:51.011500871Z" level=info msg="Finish piping stderr of container \"1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18\""
	Aug 16 22:43:51 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:51.012123363Z" level=info msg="Finish piping stdout of container \"1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18\""
	Aug 16 22:43:51 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:51.013672716Z" level=info msg="TaskExit event &TaskExit{ContainerID:1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18,ID:1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18,Pid:7049,ExitStatus:1,ExitedAt:2021-08-16 22:43:51.013222589 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:43:51 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:51.091247728Z" level=info msg="shim disconnected" id=1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18
	Aug 16 22:43:51 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:51.091430689Z" level=error msg="copy shim log" error="read /proc/self/fd/131: file already closed"
	Aug 16 22:43:51 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:51.399506364Z" level=info msg="RemoveContainer for \"2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce\""
	Aug 16 22:43:51 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:51.423843307Z" level=info msg="RemoveContainer for \"2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce\" returns successfully"
	Aug 16 22:43:52 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:52.994762402Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:43:53 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:53.004005641Z" level=info msg="trying next host" error="failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" host=fake.domain
	Aug 16 22:43:53 embed-certs-20210816223333-6986 containerd[2160]: time="2021-08-16T22:43:53.010625463Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	
	* 
	* ==> coredns [acfa6acd622519fbbdfbc3cc49cca060cd8efae7eb741a5d9b0b773af45869f9] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 73456d4e2f00e464e7e61576067882a6
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +0.036925] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.901213] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1728 comm=systemd-network
	[  +0.883717] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.353046] vboxguest: loading out-of-tree module taints kernel.
	[  +0.010605] vboxguest: PCI device not found, probably running on physical hardware.
	[ +22.507740] systemd-fstab-generator[2072]: Ignoring "noauto" for root device
	[  +0.271233] systemd-fstab-generator[2103]: Ignoring "noauto" for root device
	[  +0.152317] systemd-fstab-generator[2118]: Ignoring "noauto" for root device
	[  +0.207312] systemd-fstab-generator[2149]: Ignoring "noauto" for root device
	[  +7.476654] systemd-fstab-generator[2341]: Ignoring "noauto" for root device
	[Aug16 22:38] kauditd_printk_skb: 38 callbacks suppressed
	[ +12.917331] kauditd_printk_skb: 77 callbacks suppressed
	[ +10.598492] kauditd_printk_skb: 29 callbacks suppressed
	[ +16.363247] kauditd_printk_skb: 95 callbacks suppressed
	[Aug16 22:39] NFSD: Unable to end grace period: -110
	[Aug16 22:43] kauditd_printk_skb: 17 callbacks suppressed
	[  +4.107316] systemd-fstab-generator[5453]: Ignoring "noauto" for root device
	[ +15.896257] systemd-fstab-generator[5858]: Ignoring "noauto" for root device
	[ +14.079160] kauditd_printk_skb: 53 callbacks suppressed
	[  +6.170983] kauditd_printk_skb: 68 callbacks suppressed
	[  +6.493037] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.988277] systemd-fstab-generator[7116]: Ignoring "noauto" for root device
	[  +0.783030] systemd-fstab-generator[7174]: Ignoring "noauto" for root device
	[  +0.981771] systemd-fstab-generator[7227]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [3276b4dbcd0d1aec7607c754a7a7c9cb898aee7976f23dedf72aa17fc915f522] <==
	* raft2021/08/16 22:43:14 INFO: 8a5e282779c32a12 switched to configuration voters=(9970450775056525842)
	2021-08-16 22:43:14.356157 W | auth: simple token is not cryptographically signed
	2021-08-16 22:43:14.368942 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-16 22:43:14.373040 I | etcdserver: 8a5e282779c32a12 as single-node; fast-forwarding 9 ticks (election ticks 10)
	2021-08-16 22:43:14.374317 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-16 22:43:14.374531 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-16 22:43:14.374600 I | embed: listening for peers on 192.168.105.129:2380
	raft2021/08/16 22:43:14 INFO: 8a5e282779c32a12 switched to configuration voters=(9970450775056525842)
	2021-08-16 22:43:14.375295 I | etcdserver/membership: added member 8a5e282779c32a12 [https://192.168.105.129:2380] to cluster bbef9637db6db2f6
	raft2021/08/16 22:43:15 INFO: 8a5e282779c32a12 is starting a new election at term 1
	raft2021/08/16 22:43:15 INFO: 8a5e282779c32a12 became candidate at term 2
	raft2021/08/16 22:43:15 INFO: 8a5e282779c32a12 received MsgVoteResp from 8a5e282779c32a12 at term 2
	raft2021/08/16 22:43:15 INFO: 8a5e282779c32a12 became leader at term 2
	raft2021/08/16 22:43:15 INFO: raft.node: 8a5e282779c32a12 elected leader 8a5e282779c32a12 at term 2
	2021-08-16 22:43:15.353310 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-16 22:43:15.356492 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-16 22:43:15.356837 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-16 22:43:15.357115 I | etcdserver: published {Name:embed-certs-20210816223333-6986 ClientURLs:[https://192.168.105.129:2379]} to cluster bbef9637db6db2f6
	2021-08-16 22:43:15.360255 I | embed: ready to serve client requests
	2021-08-16 22:43:15.361142 I | embed: ready to serve client requests
	2021-08-16 22:43:15.362539 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-16 22:43:15.399026 I | embed: serving client requests on 192.168.105.129:2379
	2021-08-16 22:43:32.548674 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:43:40.849425 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:43:50.848648 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	* 
	* ==> kernel <==
	*  22:45:41 up 8 min,  0 users,  load average: 0.35, 0.53, 0.28
	Linux embed-certs-20210816223333-6986 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [6d8fa8177082e340822efb9f6f8c7938b9c592b339aeb928ced35b94554c0f5c] <==
	* W0816 22:45:38.506190       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:45:40.350373       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:45:40.697847       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:45:41.152329       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:45:41.187196       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	E0816 22:45:41.533409       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0816 22:45:41.534324       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:45:41.534976       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:45:41.537491       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0816 22:45:41.537892       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	I0816 22:45:41.538753       1 trace.go:205] Trace[186997058]: "Get" url:/api/v1/namespaces/kube-public,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:44:41.533) (total time: 60004ms):
	Trace[186997058]: [1m0.004689761s] [1m0.004689761s] END
	E0816 22:45:41.540769       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:45:41.541955       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:45:41.543404       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0816 22:45:41.544732       1 trace.go:205] Trace[1422296447]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (16-Aug-2021 22:44:41.535) (total time: 60009ms):
	Trace[1422296447]: [1m0.009315203s] [1m0.009315203s] END
	I0816 22:45:41.707525       1 trace.go:205] Trace[871340274]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:44:41.708) (total time: 59999ms):
	Trace[871340274]: [59.999325739s] [59.999325739s] END
	E0816 22:45:41.708153       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	E0816 22:45:41.708302       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:45:41.709508       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:45:41.710714       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0816 22:45:41.712231       1 trace.go:205] Trace[1233159344]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (16-Aug-2021 22:44:41.708) (total time: 60004ms):
	Trace[1233159344]: [1m0.004065548s] [1m0.004065548s] END
	
	* 
	* ==> kube-controller-manager [00de61027a57cd55c5b0d43848cb3703f370bafd18f004a20bbd4b30f766c2f8] <==
	* I0816 22:43:39.844589       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-8685c45546 to 1"
	I0816 22:43:39.912990       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:43:39.931313       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0816 22:43:39.942376       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:43:39.980587       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:39.980927       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:43:39.980954       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:43:40.080124       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:43:40.185551       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:40.185988       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:43:40.193497       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:40.194751       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:43:40.266384       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:40.267647       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:43:40.272168       1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	E0816 22:43:40.330751       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:40.331650       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:43:40.343449       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:40.347519       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:43:40.418141       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:40.418772       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:43:40.438355       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:40.441402       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:43:40.443642       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-6fqh5"
	I0816 22:43:40.501015       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-b5fzd"
	
	* 
	* ==> kube-proxy [d2b4592df27b12f825742f24ddd825a80ddb1dc22ba0ff1044a6d255e91c1480] <==
	* I0816 22:43:37.304647       1 node.go:172] Successfully retrieved node IP: 192.168.105.129
	I0816 22:43:37.305006       1 server_others.go:140] Detected node IP 192.168.105.129
	W0816 22:43:37.305154       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0816 22:43:37.440930       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0816 22:43:37.441035       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0816 22:43:37.496280       1 server_others.go:212] Using iptables Proxier.
	I0816 22:43:37.496904       1 server.go:643] Version: v1.21.3
	I0816 22:43:37.512871       1 config.go:315] Starting service config controller
	I0816 22:43:37.512906       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0816 22:43:37.512934       1 config.go:224] Starting endpoint slice config controller
	I0816 22:43:37.512938       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	W0816 22:43:37.528742       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 22:43:37.539491       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:43:37.681345       1 shared_informer.go:247] Caches are synced for service config 
	I0816 22:43:37.688494       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	* 
	* ==> kube-scheduler [5252b10be7bb737ec2020248c745eda7772c84f779b0b3c55c946d8a978aff12] <==
	* I0816 22:43:19.751185       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:43:19.753179       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0816 22:43:19.751209       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0816 22:43:19.762842       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:43:19.763262       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:43:19.763462       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:43:19.763799       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:43:19.764150       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:43:19.777531       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:43:19.777643       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:43:19.777700       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:43:19.777756       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:43:19.777835       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:43:19.777903       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:43:19.777970       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:43:19.780571       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:43:19.780655       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:43:20.706510       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:43:20.738573       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:43:20.808015       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:43:20.868559       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:43:20.927965       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:43:20.960608       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:43:21.029567       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0816 22:43:23.553368       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:37:20 UTC, end at Mon 2021-08-16 22:45:42 UTC. --
	Aug 16 22:43:40 embed-certs-20210816223333-6986 kubelet[5875]: I0816 22:43:40.567962    5875 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:43:40 embed-certs-20210816223333-6986 kubelet[5875]: I0816 22:43:40.574909    5875 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:43:40 embed-certs-20210816223333-6986 kubelet[5875]: I0816 22:43:40.646273    5875 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtwdt\" (UniqueName: \"kubernetes.io/projected/1375c060-0e73-47e5-b599-6d7e58617b31-kube-api-access-jtwdt\") pod \"kubernetes-dashboard-6fcdf4f6d-6fqh5\" (UID: \"1375c060-0e73-47e5-b599-6d7e58617b31\") "
	Aug 16 22:43:40 embed-certs-20210816223333-6986 kubelet[5875]: I0816 22:43:40.646645    5875 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjmc6\" (UniqueName: \"kubernetes.io/projected/2e56d6c3-4bfc-46cc-b263-0b26d0941122-kube-api-access-gjmc6\") pod \"dashboard-metrics-scraper-8685c45546-b5fzd\" (UID: \"2e56d6c3-4bfc-46cc-b263-0b26d0941122\") "
	Aug 16 22:43:40 embed-certs-20210816223333-6986 kubelet[5875]: I0816 22:43:40.646798    5875 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1375c060-0e73-47e5-b599-6d7e58617b31-tmp-volume\") pod \"kubernetes-dashboard-6fcdf4f6d-6fqh5\" (UID: \"1375c060-0e73-47e5-b599-6d7e58617b31\") "
	Aug 16 22:43:40 embed-certs-20210816223333-6986 kubelet[5875]: I0816 22:43:40.647033    5875 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/2e56d6c3-4bfc-46cc-b263-0b26d0941122-tmp-volume\") pod \"dashboard-metrics-scraper-8685c45546-b5fzd\" (UID: \"2e56d6c3-4bfc-46cc-b263-0b26d0941122\") "
	Aug 16 22:43:40 embed-certs-20210816223333-6986 kubelet[5875]: E0816 22:43:40.884947    5875 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:43:40 embed-certs-20210816223333-6986 kubelet[5875]: E0816 22:43:40.885346    5875 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:43:40 embed-certs-20210816223333-6986 kubelet[5875]: E0816 22:43:40.885794    5875 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zkc86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler
{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vo
lumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-qfrpw_kube-system(abb75357-7b33-4327-aa7f-8e9c15a192f8): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:43:40 embed-certs-20210816223333-6986 kubelet[5875]: E0816 22:43:40.886039    5875 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-qfrpw" podUID=abb75357-7b33-4327-aa7f-8e9c15a192f8
	Aug 16 22:43:41 embed-certs-20210816223333-6986 kubelet[5875]: E0816 22:43:41.159929    5875 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-qfrpw" podUID=abb75357-7b33-4327-aa7f-8e9c15a192f8
	Aug 16 22:43:50 embed-certs-20210816223333-6986 kubelet[5875]: I0816 22:43:50.379528    5875 scope.go:111] "RemoveContainer" containerID="2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce"
	Aug 16 22:43:51 embed-certs-20210816223333-6986 kubelet[5875]: I0816 22:43:51.386605    5875 scope.go:111] "RemoveContainer" containerID="2d30bf3d6ff60b58a0589352322888cdb5d1583e38c088123d488a0b68df62ce"
	Aug 16 22:43:51 embed-certs-20210816223333-6986 kubelet[5875]: I0816 22:43:51.391433    5875 scope.go:111] "RemoveContainer" containerID="1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18"
	Aug 16 22:43:51 embed-certs-20210816223333-6986 kubelet[5875]: E0816 22:43:51.392020    5875 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-b5fzd_kubernetes-dashboard(2e56d6c3-4bfc-46cc-b263-0b26d0941122)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-b5fzd" podUID=2e56d6c3-4bfc-46cc-b263-0b26d0941122
	Aug 16 22:43:52 embed-certs-20210816223333-6986 kubelet[5875]: I0816 22:43:52.392946    5875 scope.go:111] "RemoveContainer" containerID="1c2b60d03fedea7b0b26f04ebb92d0672d271296499ec9548c8e67146d4d7c18"
	Aug 16 22:43:52 embed-certs-20210816223333-6986 kubelet[5875]: E0816 22:43:52.393670    5875 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-b5fzd_kubernetes-dashboard(2e56d6c3-4bfc-46cc-b263-0b26d0941122)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-b5fzd" podUID=2e56d6c3-4bfc-46cc-b263-0b26d0941122
	Aug 16 22:43:53 embed-certs-20210816223333-6986 kubelet[5875]: E0816 22:43:53.012916    5875 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:43:53 embed-certs-20210816223333-6986 kubelet[5875]: E0816 22:43:53.013980    5875 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:43:53 embed-certs-20210816223333-6986 kubelet[5875]: E0816 22:43:53.016764    5875 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-zkc86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler
{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Vo
lumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-qfrpw_kube-system(abb75357-7b33-4327-aa7f-8e9c15a192f8): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:43:53 embed-certs-20210816223333-6986 kubelet[5875]: E0816 22:43:53.020582    5875 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-qfrpw" podUID=abb75357-7b33-4327-aa7f-8e9c15a192f8
	Aug 16 22:43:55 embed-certs-20210816223333-6986 kubelet[5875]: I0816 22:43:55.694590    5875 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 16 22:43:55 embed-certs-20210816223333-6986 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:43:55 embed-certs-20210816223333-6986 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:43:55 embed-certs-20210816223333-6986 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	* 
	* ==> kubernetes-dashboard [654de8c4789a875b3548d819c9b56cb6cb50e2799fcbb2b1e67efa6ea8278322] <==
	* 2021/08/16 22:43:42 Using namespace: kubernetes-dashboard
	2021/08/16 22:43:42 Using in-cluster config to connect to apiserver
	2021/08/16 22:43:42 Using secret token for csrf signing
	2021/08/16 22:43:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/16 22:43:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/16 22:43:42 Successful initial request to the apiserver, version: v1.21.3
	2021/08/16 22:43:42 Generating JWE encryption key
	2021/08/16 22:43:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/16 22:43:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/16 22:43:42 Initializing JWE encryption key from synchronized object
	2021/08/16 22:43:42 Creating in-cluster Sidecar client
	2021/08/16 22:43:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:43:42 Serving insecurely on HTTP port: 9090
	2021/08/16 22:44:35 Metric client health check failed: an error on the server ("unknown") has prevented the request from succeeding (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:43:42 Starting overwatch
	
	* 
	* ==> storage-provisioner [14afa3ade101aaad8ab76009d14c87cd1a4a0cdfc157b53549fa4e3332470356] <==
	* I0816 22:43:41.676514       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0816 22:43:41.721917       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0816 22:43:41.726273       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0816 22:43:41.763614       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0816 22:43:41.767317       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"22b6cc21-5c58-4ad8-a870-b8e4ae262cb8", APIVersion:"v1", ResourceVersion:"596", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-20210816223333-6986_bec95594-1072-419b-8255-c4a5dff4a2a2 became leader
	I0816 22:43:41.767655       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-20210816223333-6986_bec95594-1072-419b-8255-c4a5dff4a2a2!
	I0816 22:43:41.877686       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-20210816223333-6986_bec95594-1072-419b-8255-c4a5dff4a2a2!
	
	

-- /stdout --
** stderr ** 
	E0816 22:45:41.713987   20902 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)
	 output: "\n** stderr ** \nError from server (Timeout): the server was unable to return a response in the time allotted, but may still be processing the request (get nodes)\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (106.89s)
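Note: the exit status 110 above comes from the log collector itself: its "kubectl describe nodes" call timed out against an apiserver that had stopped answering after the failed pause (see the "Error from server (Timeout)" stderr above). A minimal sketch of such a bounded call in Go, assuming kubectl is on PATH; the kubeconfig path mirrors the log above and the timeout values are illustrative, not minikube's:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// describeNodes runs "kubectl describe nodes" with both a client-side
// request timeout and a hard process deadline, so an unresponsive
// apiserver cannot stall log collection indefinitely.
func describeNodes(kubeconfig string) (string, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	out, err := exec.CommandContext(ctx, "kubectl",
		"describe", "nodes",
		"--kubeconfig="+kubeconfig,
		"--request-timeout=15s").CombinedOutput()
	if ctx.Err() == context.DeadlineExceeded {
		return "", fmt.Errorf("describe nodes timed out: %w", ctx.Err())
	}
	return string(out), err
}

func main() {
	out, err := describeNodes("/var/lib/minikube/kubeconfig")
	if err != nil {
		fmt.Println("unable to fetch logs for: describe nodes:", err)
		return
	}
	fmt.Print(out)
}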

TestStartStop/group/default-k8s-different-port/serial/Pause (106.14s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20210816223418-6986 --alsologtostderr -v=1

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-different-port-20210816223418-6986 --alsologtostderr -v=1: exit status 80 (2.427295005s)
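Note: the exit status 80 here traces to an argument-count bug that the stderr dump below makes explicit: minikube batches two container IDs into one "sudo runc --root /run/containerd/runc/k8s.io pause ..." invocation, but runc pause accepts exactly one container ID, so runc exits 1 and the retry runs with the kubelet already disabled. A sketch of pausing the listed containers one at a time, assuming runc and sudo are available in the guest; this is an illustration, not the actual pause.go logic:

package main

import (
	"fmt"
	"os/exec"
)

// pauseContainers suspends each container individually, because
// "runc pause" takes exactly one container ID per invocation.
func pauseContainers(root string, ids []string) error {
	for _, id := range ids {
		cmd := exec.Command("sudo", "runc", "--root", root, "pause", id)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("runc pause %s: %v: %s", id, err, out)
		}
	}
	return nil
}

func main() {
	// Container IDs taken from the runc list output in this log.
	ids := []string{
		"091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f",
		"57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8",
	}
	if err := pauseContainers("/run/containerd/runc/k8s.io", ids); err != nil {
		fmt.Println(err)
	}
}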

-- stdout --
	* Pausing node default-k8s-different-port-20210816223418-6986 ... 
	
	

-- /stdout --
** stderr ** 
	I0816 22:44:04.013525   20391 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:44:04.013603   20391 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:44:04.013610   20391 out.go:311] Setting ErrFile to fd 2...
	I0816 22:44:04.013613   20391 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:44:04.013718   20391 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:44:04.013870   20391 out.go:305] Setting JSON to false
	I0816 22:44:04.013890   20391 mustload.go:65] Loading cluster: default-k8s-different-port-20210816223418-6986
	I0816 22:44:04.014214   20391 config.go:177] Loaded profile config "default-k8s-different-port-20210816223418-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:44:04.015018   20391 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:44:04.015156   20391 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:44:04.026536   20391 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40517
	I0816 22:44:04.026995   20391 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:44:04.027499   20391 main.go:130] libmachine: Using API Version  1
	I0816 22:44:04.027519   20391 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:44:04.027928   20391 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:44:04.028101   20391 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetState
	I0816 22:44:04.031128   20391 host.go:66] Checking if "default-k8s-different-port-20210816223418-6986" exists ...
	I0816 22:44:04.031526   20391 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:44:04.031566   20391 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:44:04.041689   20391 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35843
	I0816 22:44:04.042121   20391 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:44:04.042515   20391 main.go:130] libmachine: Using API Version  1
	I0816 22:44:04.042535   20391 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:44:04.042833   20391 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:44:04.043010   20391 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:44:04.043730   20391 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=
true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plu
gin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-different-port-20210816223418-6986 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0816 22:44:04.046304   20391 out.go:177] * Pausing node default-k8s-different-port-20210816223418-6986 ... 
	I0816 22:44:04.046329   20391 host.go:66] Checking if "default-k8s-different-port-20210816223418-6986" exists ...
	I0816 22:44:04.046749   20391 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:44:04.046789   20391 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:44:04.057783   20391 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46515
	I0816 22:44:04.058248   20391 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:44:04.058719   20391 main.go:130] libmachine: Using API Version  1
	I0816 22:44:04.058742   20391 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:44:04.059089   20391 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:44:04.059243   20391 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:44:04.059484   20391 ssh_runner.go:149] Run: systemctl --version
	I0816 22:44:04.059513   20391 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:44:04.065694   20391 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:44:04.066098   20391 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:44:04.066130   20391 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:44:04.066300   20391 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:44:04.066454   20391 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:44:04.066582   20391 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:44:04.066690   20391 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:44:04.180850   20391 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:44:04.191125   20391 pause.go:50] kubelet running: true
	I0816 22:44:04.191192   20391 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:44:04.417140   20391 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:44:04.417238   20391 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:44:04.539897   20391 cri.go:76] found id: "dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79"
	I0816 22:44:04.539927   20391 cri.go:76] found id: "def14af399b574c5412b6c393d65b7d0b6ae9f71d7709f25fe7a7956acf00e0c"
	I0816 22:44:04.539932   20391 cri.go:76] found id: "59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5"
	I0816 22:44:04.539936   20391 cri.go:76] found id: "57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8"
	I0816 22:44:04.539939   20391 cri.go:76] found id: "91c23817fb7abda2b7c16241219d51e4ed678c5cb58792aa6ddbebafd8c8e1d7"
	I0816 22:44:04.539943   20391 cri.go:76] found id: "b98cd396fdb126a029db11a0dc2fba55c059c5e1325b67b4a634d563af29288b"
	I0816 22:44:04.539947   20391 cri.go:76] found id: "ee6e3ad90fb4e19819658b68382724e6a5cf8fe284be81a4b38efc7636b1dde1"
	I0816 22:44:04.539950   20391 cri.go:76] found id: "cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363"
	I0816 22:44:04.539953   20391 cri.go:76] found id: "091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f"
	I0816 22:44:04.539960   20391 cri.go:76] found id: ""
	I0816 22:44:04.540028   20391 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0816 22:44:04.587177   20391 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f","pid":6652,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f/rootfs","created":"2021-08-16T22:43:52.766767443Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"35bd4253798955a15f9479aac90343746cf3663478ae758392ad47246b5e967a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0bbbf0f687140f3a58e4d946ef636533bb339483f7f898eb3a8752eed74d652c","pid":5392,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0bbbf0f687140f3a58e4d946ef636533bb339483f7f898eb3a8752eed74d652c","rootfs":"/run/containerd/io.containe
rd.runtime.v2.task/k8s.io/0bbbf0f687140f3a58e4d946ef636533bb339483f7f898eb3a8752eed74d652c/rootfs","created":"2021-08-16T22:43:22.287515984Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0bbbf0f687140f3a58e4d946ef636533bb339483f7f898eb3a8752eed74d652c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-default-k8s-different-port-20210816223418-6986_bf7898b008f5522b40af5e4944af35da"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1edc418e603d15886d25351e515b8f0101a592ddb903d3d6501fb8c5f472d2bf","pid":5383,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1edc418e603d15886d25351e515b8f0101a592ddb903d3d6501fb8c5f472d2bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1edc418e603d15886d25351e515b8f0101a592ddb903d3d6501fb8c5f472d2bf/rootfs","created":"2021-08-16T22:43:22.257807817Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1edc4
18e603d15886d25351e515b8f0101a592ddb903d3d6501fb8c5f472d2bf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-default-k8s-different-port-20210816223418-6986_f5b0868343e4393a2db84d0a125fb9b8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"232baf5e6e81875c27def143a91aee80a3e1ef739a9f3162089780ea14ea7a3a","pid":5377,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/232baf5e6e81875c27def143a91aee80a3e1ef739a9f3162089780ea14ea7a3a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/232baf5e6e81875c27def143a91aee80a3e1ef739a9f3162089780ea14ea7a3a/rootfs","created":"2021-08-16T22:43:22.268582228Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"232baf5e6e81875c27def143a91aee80a3e1ef739a9f3162089780ea14ea7a3a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-default-k8s-different-port-20210816223418-6986_2cec70d99baaffdb32771ad61e5d108f"},"owner":"root"},{"ociVersion":"
1.0.2-dev","id":"35bd4253798955a15f9479aac90343746cf3663478ae758392ad47246b5e967a","pid":6523,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35bd4253798955a15f9479aac90343746cf3663478ae758392ad47246b5e967a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35bd4253798955a15f9479aac90343746cf3663478ae758392ad47246b5e967a/rootfs","created":"2021-08-16T22:43:51.910791273Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"35bd4253798955a15f9479aac90343746cf3663478ae758392ad47246b5e967a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-7gstk_806d3966-d956-400b-b825-eb1393026138"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a8ee57b53e2eea1b5e785b06a20b761d3cb64d70e8a2cea5617170c35c023ed","pid":5369,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a8ee57b53e2eea1b5e785b06a20b761d3cb64d70e8a2cea5617170c35c023ed","rootfs":"/run/con
tainerd/io.containerd.runtime.v2.task/k8s.io/3a8ee57b53e2eea1b5e785b06a20b761d3cb64d70e8a2cea5617170c35c023ed/rootfs","created":"2021-08-16T22:43:22.261992822Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3a8ee57b53e2eea1b5e785b06a20b761d3cb64d70e8a2cea5617170c35c023ed","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-different-port-20210816223418-6986_9666ed4138865243da78558f0114d546"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"52b506e6ea8d483556e6d99b63d5182ab7f64ecf9decd63acb67b80ad56ff9c9","pid":6342,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/52b506e6ea8d483556e6d99b63d5182ab7f64ecf9decd63acb67b80ad56ff9c9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/52b506e6ea8d483556e6d99b63d5182ab7f64ecf9decd63acb67b80ad56ff9c9/rootfs","created":"2021-08-16T22:43:51.130909911Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-
id":"52b506e6ea8d483556e6d99b63d5182ab7f64ecf9decd63acb67b80ad56ff9c9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-pbxnr_fa2d27a5-b243-4a8f-9450-b834d1ce5bb0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8","pid":5562,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8/rootfs","created":"2021-08-16T22:43:23.672123264Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1edc418e603d15886d25351e515b8f0101a592ddb903d3d6501fb8c5f472d2bf"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5","pid":5884,"status":"running","bundle":"/run
/containerd/io.containerd.runtime.v2.task/k8s.io/59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5/rootfs","created":"2021-08-16T22:43:46.387583926Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ba0dd77bce835f39513823292ed6e73aed2799df561589783949cdd279ca198f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"91c23817fb7abda2b7c16241219d51e4ed678c5cb58792aa6ddbebafd8c8e1d7","pid":5526,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/91c23817fb7abda2b7c16241219d51e4ed678c5cb58792aa6ddbebafd8c8e1d7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/91c23817fb7abda2b7c16241219d51e4ed678c5cb58792aa6ddbebafd8c8e1d7/rootfs","created":"2021-08-16T22:43:23.510844339Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","i
o.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"232baf5e6e81875c27def143a91aee80a3e1ef739a9f3162089780ea14ea7a3a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"94f02cd4190110c6a14cb72203c80ac478f0f42f6d9c58cdd09fafa3f66255e6","pid":6529,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/94f02cd4190110c6a14cb72203c80ac478f0f42f6d9c58cdd09fafa3f66255e6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/94f02cd4190110c6a14cb72203c80ac478f0f42f6d9c58cdd09fafa3f66255e6/rootfs","created":"2021-08-16T22:43:51.942669791Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"94f02cd4190110c6a14cb72203c80ac478f0f42f6d9c58cdd09fafa3f66255e6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-nctsf_3023a30f-e167-4d2d-9cb7-5f01b3a89700"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b98cd396fdb126a029db11a0dc2fba55c059c5e1325b67b4a634d563af2928
8b","pid":5524,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b98cd396fdb126a029db11a0dc2fba55c059c5e1325b67b4a634d563af29288b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b98cd396fdb126a029db11a0dc2fba55c059c5e1325b67b4a634d563af29288b/rootfs","created":"2021-08-16T22:43:23.422552219Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0bbbf0f687140f3a58e4d946ef636533bb339483f7f898eb3a8752eed74d652c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ba0dd77bce835f39513823292ed6e73aed2799df561589783949cdd279ca198f","pid":5850,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba0dd77bce835f39513823292ed6e73aed2799df561589783949cdd279ca198f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba0dd77bce835f39513823292ed6e73aed2799df561589783949cdd279ca198f/rootfs","created":"2021-08-16T22:43:46.091778395Z","annotat
ions":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ba0dd77bce835f39513823292ed6e73aed2799df561589783949cdd279ca198f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-qhsq8_9abb9351-b721-48bb-94b9-887b5afc7584"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd0f8b9910c05b25b831041c22c7df04d08a9baabb20b7d5b37d960440fe4cbf","pid":6338,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd0f8b9910c05b25b831041c22c7df04d08a9baabb20b7d5b37d960440fe4cbf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd0f8b9910c05b25b831041c22c7df04d08a9baabb20b7d5b37d960440fe4cbf/rootfs","created":"2021-08-16T22:43:51.498813115Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cd0f8b9910c05b25b831041c22c7df04d08a9baabb20b7d5b37d960440fe4cbf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_a88a523b-5707-46b9-b7cf-6931db0d4487"},"owner":"ro
ot"},{"ociVersion":"1.0.2-dev","id":"def14af399b574c5412b6c393d65b7d0b6ae9f71d7709f25fe7a7956acf00e0c","pid":6181,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/def14af399b574c5412b6c393d65b7d0b6ae9f71d7709f25fe7a7956acf00e0c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/def14af399b574c5412b6c393d65b7d0b6ae9f71d7709f25fe7a7956acf00e0c/rootfs","created":"2021-08-16T22:43:48.773575867Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e49dbc663b365fe821c92558277f0c3ae21637c6a27e32074f2b85f6fdd0fde8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79","pid":6676,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dfd15db9428f264623cd687de765401bc3b7f7293
c3ecad264cc26e1ff22cd79/rootfs","created":"2021-08-16T22:43:52.897941235Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"cd0f8b9910c05b25b831041c22c7df04d08a9baabb20b7d5b37d960440fe4cbf"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e49dbc663b365fe821c92558277f0c3ae21637c6a27e32074f2b85f6fdd0fde8","pid":6126,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e49dbc663b365fe821c92558277f0c3ae21637c6a27e32074f2b85f6fdd0fde8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e49dbc663b365fe821c92558277f0c3ae21637c6a27e32074f2b85f6fdd0fde8/rootfs","created":"2021-08-16T22:43:47.805663518Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e49dbc663b365fe821c92558277f0c3ae21637c6a27e32074f2b85f6fdd0fde8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-jvhn9_3c48c2dc-4beb-4359-aadc-136
5db48feac"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ee6e3ad90fb4e19819658b68382724e6a5cf8fe284be81a4b38efc7636b1dde1","pid":5449,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee6e3ad90fb4e19819658b68382724e6a5cf8fe284be81a4b38efc7636b1dde1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee6e3ad90fb4e19819658b68382724e6a5cf8fe284be81a4b38efc7636b1dde1/rootfs","created":"2021-08-16T22:43:22.926222712Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3a8ee57b53e2eea1b5e785b06a20b761d3cb64d70e8a2cea5617170c35c023ed"},"owner":"root"}]
	I0816 22:44:04.587370   20391 cri.go:113] list returned 18 containers
	I0816 22:44:04.587385   20391 cri.go:116] container: {ID:091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f Status:running}
	I0816 22:44:04.587411   20391 cri.go:116] container: {ID:0bbbf0f687140f3a58e4d946ef636533bb339483f7f898eb3a8752eed74d652c Status:running}
	I0816 22:44:04.587416   20391 cri.go:118] skipping 0bbbf0f687140f3a58e4d946ef636533bb339483f7f898eb3a8752eed74d652c - not in ps
	I0816 22:44:04.587421   20391 cri.go:116] container: {ID:1edc418e603d15886d25351e515b8f0101a592ddb903d3d6501fb8c5f472d2bf Status:running}
	I0816 22:44:04.587425   20391 cri.go:118] skipping 1edc418e603d15886d25351e515b8f0101a592ddb903d3d6501fb8c5f472d2bf - not in ps
	I0816 22:44:04.587430   20391 cri.go:116] container: {ID:232baf5e6e81875c27def143a91aee80a3e1ef739a9f3162089780ea14ea7a3a Status:running}
	I0816 22:44:04.587434   20391 cri.go:118] skipping 232baf5e6e81875c27def143a91aee80a3e1ef739a9f3162089780ea14ea7a3a - not in ps
	I0816 22:44:04.587439   20391 cri.go:116] container: {ID:35bd4253798955a15f9479aac90343746cf3663478ae758392ad47246b5e967a Status:running}
	I0816 22:44:04.587443   20391 cri.go:118] skipping 35bd4253798955a15f9479aac90343746cf3663478ae758392ad47246b5e967a - not in ps
	I0816 22:44:04.587453   20391 cri.go:116] container: {ID:3a8ee57b53e2eea1b5e785b06a20b761d3cb64d70e8a2cea5617170c35c023ed Status:running}
	I0816 22:44:04.587460   20391 cri.go:118] skipping 3a8ee57b53e2eea1b5e785b06a20b761d3cb64d70e8a2cea5617170c35c023ed - not in ps
	I0816 22:44:04.587464   20391 cri.go:116] container: {ID:52b506e6ea8d483556e6d99b63d5182ab7f64ecf9decd63acb67b80ad56ff9c9 Status:running}
	I0816 22:44:04.587470   20391 cri.go:118] skipping 52b506e6ea8d483556e6d99b63d5182ab7f64ecf9decd63acb67b80ad56ff9c9 - not in ps
	I0816 22:44:04.587473   20391 cri.go:116] container: {ID:57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8 Status:running}
	I0816 22:44:04.587478   20391 cri.go:116] container: {ID:59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5 Status:running}
	I0816 22:44:04.587483   20391 cri.go:116] container: {ID:91c23817fb7abda2b7c16241219d51e4ed678c5cb58792aa6ddbebafd8c8e1d7 Status:running}
	I0816 22:44:04.587487   20391 cri.go:116] container: {ID:94f02cd4190110c6a14cb72203c80ac478f0f42f6d9c58cdd09fafa3f66255e6 Status:running}
	I0816 22:44:04.587492   20391 cri.go:118] skipping 94f02cd4190110c6a14cb72203c80ac478f0f42f6d9c58cdd09fafa3f66255e6 - not in ps
	I0816 22:44:04.587498   20391 cri.go:116] container: {ID:b98cd396fdb126a029db11a0dc2fba55c059c5e1325b67b4a634d563af29288b Status:running}
	I0816 22:44:04.587503   20391 cri.go:116] container: {ID:ba0dd77bce835f39513823292ed6e73aed2799df561589783949cdd279ca198f Status:running}
	I0816 22:44:04.587508   20391 cri.go:118] skipping ba0dd77bce835f39513823292ed6e73aed2799df561589783949cdd279ca198f - not in ps
	I0816 22:44:04.587512   20391 cri.go:116] container: {ID:cd0f8b9910c05b25b831041c22c7df04d08a9baabb20b7d5b37d960440fe4cbf Status:running}
	I0816 22:44:04.587517   20391 cri.go:118] skipping cd0f8b9910c05b25b831041c22c7df04d08a9baabb20b7d5b37d960440fe4cbf - not in ps
	I0816 22:44:04.587521   20391 cri.go:116] container: {ID:def14af399b574c5412b6c393d65b7d0b6ae9f71d7709f25fe7a7956acf00e0c Status:running}
	I0816 22:44:04.587525   20391 cri.go:116] container: {ID:dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79 Status:running}
	I0816 22:44:04.587528   20391 cri.go:116] container: {ID:e49dbc663b365fe821c92558277f0c3ae21637c6a27e32074f2b85f6fdd0fde8 Status:running}
	I0816 22:44:04.587533   20391 cri.go:118] skipping e49dbc663b365fe821c92558277f0c3ae21637c6a27e32074f2b85f6fdd0fde8 - not in ps
	I0816 22:44:04.587538   20391 cri.go:116] container: {ID:ee6e3ad90fb4e19819658b68382724e6a5cf8fe284be81a4b38efc7636b1dde1 Status:running}
	I0816 22:44:04.587580   20391 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f
	I0816 22:44:04.609460   20391 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f 57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8
	I0816 22:44:04.629433   20391 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f 57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:44:04Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0816 22:44:04.905871   20391 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:44:04.918091   20391 pause.go:50] kubelet running: false
	I0816 22:44:04.918138   20391 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:44:05.150269   20391 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:44:05.150366   20391 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:44:05.316627   20391 cri.go:76] found id: "dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79"
	I0816 22:44:05.316653   20391 cri.go:76] found id: "def14af399b574c5412b6c393d65b7d0b6ae9f71d7709f25fe7a7956acf00e0c"
	I0816 22:44:05.316658   20391 cri.go:76] found id: "59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5"
	I0816 22:44:05.316663   20391 cri.go:76] found id: "57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8"
	I0816 22:44:05.316669   20391 cri.go:76] found id: "91c23817fb7abda2b7c16241219d51e4ed678c5cb58792aa6ddbebafd8c8e1d7"
	I0816 22:44:05.316674   20391 cri.go:76] found id: "b98cd396fdb126a029db11a0dc2fba55c059c5e1325b67b4a634d563af29288b"
	I0816 22:44:05.316687   20391 cri.go:76] found id: "ee6e3ad90fb4e19819658b68382724e6a5cf8fe284be81a4b38efc7636b1dde1"
	I0816 22:44:05.316694   20391 cri.go:76] found id: "cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363"
	I0816 22:44:05.316701   20391 cri.go:76] found id: "091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f"
	I0816 22:44:05.316716   20391 cri.go:76] found id: ""
	I0816 22:44:05.316758   20391 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0816 22:44:05.368195   20391 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f","pid":6652,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f/rootfs","created":"2021-08-16T22:43:52.766767443Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"35bd4253798955a15f9479aac90343746cf3663478ae758392ad47246b5e967a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0bbbf0f687140f3a58e4d946ef636533bb339483f7f898eb3a8752eed74d652c","pid":5392,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0bbbf0f687140f3a58e4d946ef636533bb339483f7f898eb3a8752eed74d652c","rootfs":"/run/containerd/io.container
d.runtime.v2.task/k8s.io/0bbbf0f687140f3a58e4d946ef636533bb339483f7f898eb3a8752eed74d652c/rootfs","created":"2021-08-16T22:43:22.287515984Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0bbbf0f687140f3a58e4d946ef636533bb339483f7f898eb3a8752eed74d652c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-default-k8s-different-port-20210816223418-6986_bf7898b008f5522b40af5e4944af35da"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1edc418e603d15886d25351e515b8f0101a592ddb903d3d6501fb8c5f472d2bf","pid":5383,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1edc418e603d15886d25351e515b8f0101a592ddb903d3d6501fb8c5f472d2bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1edc418e603d15886d25351e515b8f0101a592ddb903d3d6501fb8c5f472d2bf/rootfs","created":"2021-08-16T22:43:22.257807817Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1edc41
8e603d15886d25351e515b8f0101a592ddb903d3d6501fb8c5f472d2bf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-default-k8s-different-port-20210816223418-6986_f5b0868343e4393a2db84d0a125fb9b8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"232baf5e6e81875c27def143a91aee80a3e1ef739a9f3162089780ea14ea7a3a","pid":5377,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/232baf5e6e81875c27def143a91aee80a3e1ef739a9f3162089780ea14ea7a3a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/232baf5e6e81875c27def143a91aee80a3e1ef739a9f3162089780ea14ea7a3a/rootfs","created":"2021-08-16T22:43:22.268582228Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"232baf5e6e81875c27def143a91aee80a3e1ef739a9f3162089780ea14ea7a3a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-default-k8s-different-port-20210816223418-6986_2cec70d99baaffdb32771ad61e5d108f"},"owner":"root"},{"ociVersion":"1
.0.2-dev","id":"35bd4253798955a15f9479aac90343746cf3663478ae758392ad47246b5e967a","pid":6523,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35bd4253798955a15f9479aac90343746cf3663478ae758392ad47246b5e967a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35bd4253798955a15f9479aac90343746cf3663478ae758392ad47246b5e967a/rootfs","created":"2021-08-16T22:43:51.910791273Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"35bd4253798955a15f9479aac90343746cf3663478ae758392ad47246b5e967a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-7gstk_806d3966-d956-400b-b825-eb1393026138"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a8ee57b53e2eea1b5e785b06a20b761d3cb64d70e8a2cea5617170c35c023ed","pid":5369,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a8ee57b53e2eea1b5e785b06a20b761d3cb64d70e8a2cea5617170c35c023ed","rootfs":"/run/cont
ainerd/io.containerd.runtime.v2.task/k8s.io/3a8ee57b53e2eea1b5e785b06a20b761d3cb64d70e8a2cea5617170c35c023ed/rootfs","created":"2021-08-16T22:43:22.261992822Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3a8ee57b53e2eea1b5e785b06a20b761d3cb64d70e8a2cea5617170c35c023ed","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-different-port-20210816223418-6986_9666ed4138865243da78558f0114d546"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"52b506e6ea8d483556e6d99b63d5182ab7f64ecf9decd63acb67b80ad56ff9c9","pid":6342,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/52b506e6ea8d483556e6d99b63d5182ab7f64ecf9decd63acb67b80ad56ff9c9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/52b506e6ea8d483556e6d99b63d5182ab7f64ecf9decd63acb67b80ad56ff9c9/rootfs","created":"2021-08-16T22:43:51.130909911Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-i
d":"52b506e6ea8d483556e6d99b63d5182ab7f64ecf9decd63acb67b80ad56ff9c9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-pbxnr_fa2d27a5-b243-4a8f-9450-b834d1ce5bb0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8","pid":5562,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8/rootfs","created":"2021-08-16T22:43:23.672123264Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1edc418e603d15886d25351e515b8f0101a592ddb903d3d6501fb8c5f472d2bf"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5","pid":5884,"status":"running","bundle":"/run/
containerd/io.containerd.runtime.v2.task/k8s.io/59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5/rootfs","created":"2021-08-16T22:43:46.387583926Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ba0dd77bce835f39513823292ed6e73aed2799df561589783949cdd279ca198f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"91c23817fb7abda2b7c16241219d51e4ed678c5cb58792aa6ddbebafd8c8e1d7","pid":5526,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/91c23817fb7abda2b7c16241219d51e4ed678c5cb58792aa6ddbebafd8c8e1d7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/91c23817fb7abda2b7c16241219d51e4ed678c5cb58792aa6ddbebafd8c8e1d7/rootfs","created":"2021-08-16T22:43:23.510844339Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io
.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"232baf5e6e81875c27def143a91aee80a3e1ef739a9f3162089780ea14ea7a3a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"94f02cd4190110c6a14cb72203c80ac478f0f42f6d9c58cdd09fafa3f66255e6","pid":6529,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/94f02cd4190110c6a14cb72203c80ac478f0f42f6d9c58cdd09fafa3f66255e6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/94f02cd4190110c6a14cb72203c80ac478f0f42f6d9c58cdd09fafa3f66255e6/rootfs","created":"2021-08-16T22:43:51.942669791Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"94f02cd4190110c6a14cb72203c80ac478f0f42f6d9c58cdd09fafa3f66255e6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-nctsf_3023a30f-e167-4d2d-9cb7-5f01b3a89700"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b98cd396fdb126a029db11a0dc2fba55c059c5e1325b67b4a634d563af29288
b","pid":5524,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b98cd396fdb126a029db11a0dc2fba55c059c5e1325b67b4a634d563af29288b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b98cd396fdb126a029db11a0dc2fba55c059c5e1325b67b4a634d563af29288b/rootfs","created":"2021-08-16T22:43:23.422552219Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0bbbf0f687140f3a58e4d946ef636533bb339483f7f898eb3a8752eed74d652c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ba0dd77bce835f39513823292ed6e73aed2799df561589783949cdd279ca198f","pid":5850,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba0dd77bce835f39513823292ed6e73aed2799df561589783949cdd279ca198f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba0dd77bce835f39513823292ed6e73aed2799df561589783949cdd279ca198f/rootfs","created":"2021-08-16T22:43:46.091778395Z","annotati
ons":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ba0dd77bce835f39513823292ed6e73aed2799df561589783949cdd279ca198f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-qhsq8_9abb9351-b721-48bb-94b9-887b5afc7584"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd0f8b9910c05b25b831041c22c7df04d08a9baabb20b7d5b37d960440fe4cbf","pid":6338,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd0f8b9910c05b25b831041c22c7df04d08a9baabb20b7d5b37d960440fe4cbf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd0f8b9910c05b25b831041c22c7df04d08a9baabb20b7d5b37d960440fe4cbf/rootfs","created":"2021-08-16T22:43:51.498813115Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cd0f8b9910c05b25b831041c22c7df04d08a9baabb20b7d5b37d960440fe4cbf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_a88a523b-5707-46b9-b7cf-6931db0d4487"},"owner":"roo
t"},{"ociVersion":"1.0.2-dev","id":"def14af399b574c5412b6c393d65b7d0b6ae9f71d7709f25fe7a7956acf00e0c","pid":6181,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/def14af399b574c5412b6c393d65b7d0b6ae9f71d7709f25fe7a7956acf00e0c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/def14af399b574c5412b6c393d65b7d0b6ae9f71d7709f25fe7a7956acf00e0c/rootfs","created":"2021-08-16T22:43:48.773575867Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e49dbc663b365fe821c92558277f0c3ae21637c6a27e32074f2b85f6fdd0fde8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79","pid":6676,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dfd15db9428f264623cd687de765401bc3b7f7293c
3ecad264cc26e1ff22cd79/rootfs","created":"2021-08-16T22:43:52.897941235Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"cd0f8b9910c05b25b831041c22c7df04d08a9baabb20b7d5b37d960440fe4cbf"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e49dbc663b365fe821c92558277f0c3ae21637c6a27e32074f2b85f6fdd0fde8","pid":6126,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e49dbc663b365fe821c92558277f0c3ae21637c6a27e32074f2b85f6fdd0fde8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e49dbc663b365fe821c92558277f0c3ae21637c6a27e32074f2b85f6fdd0fde8/rootfs","created":"2021-08-16T22:43:47.805663518Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e49dbc663b365fe821c92558277f0c3ae21637c6a27e32074f2b85f6fdd0fde8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-jvhn9_3c48c2dc-4beb-4359-aadc-1365
db48feac"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ee6e3ad90fb4e19819658b68382724e6a5cf8fe284be81a4b38efc7636b1dde1","pid":5449,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee6e3ad90fb4e19819658b68382724e6a5cf8fe284be81a4b38efc7636b1dde1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee6e3ad90fb4e19819658b68382724e6a5cf8fe284be81a4b38efc7636b1dde1/rootfs","created":"2021-08-16T22:43:22.926222712Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3a8ee57b53e2eea1b5e785b06a20b761d3cb64d70e8a2cea5617170c35c023ed"},"owner":"root"}]
	I0816 22:44:05.368403   20391 cri.go:113] list returned 18 containers
	I0816 22:44:05.368418   20391 cri.go:116] container: {ID:091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f Status:paused}
	I0816 22:44:05.368429   20391 cri.go:122] skipping {091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f paused}: state = "paused", want "running"
	I0816 22:44:05.368441   20391 cri.go:116] container: {ID:0bbbf0f687140f3a58e4d946ef636533bb339483f7f898eb3a8752eed74d652c Status:running}
	I0816 22:44:05.368450   20391 cri.go:118] skipping 0bbbf0f687140f3a58e4d946ef636533bb339483f7f898eb3a8752eed74d652c - not in ps
	I0816 22:44:05.368454   20391 cri.go:116] container: {ID:1edc418e603d15886d25351e515b8f0101a592ddb903d3d6501fb8c5f472d2bf Status:running}
	I0816 22:44:05.368459   20391 cri.go:118] skipping 1edc418e603d15886d25351e515b8f0101a592ddb903d3d6501fb8c5f472d2bf - not in ps
	I0816 22:44:05.368465   20391 cri.go:116] container: {ID:232baf5e6e81875c27def143a91aee80a3e1ef739a9f3162089780ea14ea7a3a Status:running}
	I0816 22:44:05.368474   20391 cri.go:118] skipping 232baf5e6e81875c27def143a91aee80a3e1ef739a9f3162089780ea14ea7a3a - not in ps
	I0816 22:44:05.368480   20391 cri.go:116] container: {ID:35bd4253798955a15f9479aac90343746cf3663478ae758392ad47246b5e967a Status:running}
	I0816 22:44:05.368484   20391 cri.go:118] skipping 35bd4253798955a15f9479aac90343746cf3663478ae758392ad47246b5e967a - not in ps
	I0816 22:44:05.368491   20391 cri.go:116] container: {ID:3a8ee57b53e2eea1b5e785b06a20b761d3cb64d70e8a2cea5617170c35c023ed Status:running}
	I0816 22:44:05.368495   20391 cri.go:118] skipping 3a8ee57b53e2eea1b5e785b06a20b761d3cb64d70e8a2cea5617170c35c023ed - not in ps
	I0816 22:44:05.368501   20391 cri.go:116] container: {ID:52b506e6ea8d483556e6d99b63d5182ab7f64ecf9decd63acb67b80ad56ff9c9 Status:running}
	I0816 22:44:05.368506   20391 cri.go:118] skipping 52b506e6ea8d483556e6d99b63d5182ab7f64ecf9decd63acb67b80ad56ff9c9 - not in ps
	I0816 22:44:05.368511   20391 cri.go:116] container: {ID:57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8 Status:running}
	I0816 22:44:05.368516   20391 cri.go:116] container: {ID:59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5 Status:running}
	I0816 22:44:05.368523   20391 cri.go:116] container: {ID:91c23817fb7abda2b7c16241219d51e4ed678c5cb58792aa6ddbebafd8c8e1d7 Status:running}
	I0816 22:44:05.368527   20391 cri.go:116] container: {ID:94f02cd4190110c6a14cb72203c80ac478f0f42f6d9c58cdd09fafa3f66255e6 Status:running}
	I0816 22:44:05.368534   20391 cri.go:118] skipping 94f02cd4190110c6a14cb72203c80ac478f0f42f6d9c58cdd09fafa3f66255e6 - not in ps
	I0816 22:44:05.368538   20391 cri.go:116] container: {ID:b98cd396fdb126a029db11a0dc2fba55c059c5e1325b67b4a634d563af29288b Status:running}
	I0816 22:44:05.368541   20391 cri.go:116] container: {ID:ba0dd77bce835f39513823292ed6e73aed2799df561589783949cdd279ca198f Status:running}
	I0816 22:44:05.368551   20391 cri.go:118] skipping ba0dd77bce835f39513823292ed6e73aed2799df561589783949cdd279ca198f - not in ps
	I0816 22:44:05.368557   20391 cri.go:116] container: {ID:cd0f8b9910c05b25b831041c22c7df04d08a9baabb20b7d5b37d960440fe4cbf Status:running}
	I0816 22:44:05.368562   20391 cri.go:118] skipping cd0f8b9910c05b25b831041c22c7df04d08a9baabb20b7d5b37d960440fe4cbf - not in ps
	I0816 22:44:05.368568   20391 cri.go:116] container: {ID:def14af399b574c5412b6c393d65b7d0b6ae9f71d7709f25fe7a7956acf00e0c Status:running}
	I0816 22:44:05.368572   20391 cri.go:116] container: {ID:dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79 Status:running}
	I0816 22:44:05.368579   20391 cri.go:116] container: {ID:e49dbc663b365fe821c92558277f0c3ae21637c6a27e32074f2b85f6fdd0fde8 Status:running}
	I0816 22:44:05.368583   20391 cri.go:118] skipping e49dbc663b365fe821c92558277f0c3ae21637c6a27e32074f2b85f6fdd0fde8 - not in ps
	I0816 22:44:05.368589   20391 cri.go:116] container: {ID:ee6e3ad90fb4e19819658b68382724e6a5cf8fe284be81a4b38efc7636b1dde1 Status:running}
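The cri.go lines above show the selection pass minikube runs before pausing: entries whose runc state is already "paused" are skipped (cri.go:122), and IDs that never appeared in the earlier crictl ps output, i.e. the pod sandboxes, are skipped as well (cri.go:118), leaving only running workload containers. A minimal Go sketch of that filter (types and sample IDs are illustrative, not minikube's actual source):

    package main

    import "fmt"

    // container mirrors what cri.go logs for each runc list entry:
    // an ID plus a runc status string.
    type container struct {
        ID     string
        Status string
    }

    // filterPausable keeps only containers that are currently running and
    // that also appeared in the crictl ps output (inPS), mirroring the
    // "state = paused" and "not in ps" skips in the log above.
    func filterPausable(all []container, inPS map[string]bool) []string {
        var ids []string
        for _, c := range all {
            if c.Status != "running" {
                continue // already paused; nothing to do
            }
            if !inPS[c.ID] {
                continue // sandbox ID, absent from crictl ps
            }
            ids = append(ids, c.ID)
        }
        return ids
    }

    func main() {
        all := []container{
            {"091f1b89", "paused"},  // kubernetes-dashboard, already paused
            {"1edc418e", "running"}, // etcd sandbox, absent from crictl ps
            {"57f34c51", "running"}, // etcd container, present in crictl ps
        }
        inPS := map[string]bool{"57f34c51": true}
        fmt.Println(filterPausable(all, inPS)) // [57f34c51]
    }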
	I0816 22:44:05.368629   20391 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8
	I0816 22:44:05.391136   20391 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8 59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5
	I0816 22:44:05.418810   20391 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8 59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:44:05Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
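The runc usage text above pinpoints the failure: "runc pause" accepts exactly one container ID, but the command at 22:44:05.391 passed two (the etcd and kube-proxy containers), so runc exits with status 1 and minikube falls back to its retry loop. Invoking runc once per ID avoids the usage error; the sketch below is a plain exec loop for illustration, not minikube's actual fix.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // pauseEach shells out to runc once per container ID, because
    // "runc pause" takes exactly one argument. root is the runc state
    // root, /run/containerd/runc/k8s.io on this node.
    func pauseEach(root string, ids []string) error {
        for _, id := range ids {
            out, err := exec.Command("sudo", "runc", "--root", root, "pause", id).CombinedOutput()
            if err != nil {
                return fmt.Errorf("runc pause %s: %v: %s", id, err, out)
            }
        }
        return nil
    }

    func main() {
        // The two IDs the failing command tried to pause in a single call.
        ids := []string{
            "57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8", // etcd
            "59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5", // kube-proxy
        }
        if err := pauseEach("/run/containerd/runc/k8s.io", ids); err != nil {
            fmt.Println(err)
        }
    }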
	I0816 22:44:05.959294   20391 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:44:05.970422   20391 pause.go:50] kubelet running: false
	I0816 22:44:05.970483   20391 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:44:06.154106   20391 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:44:06.154211   20391 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:44:06.281063   20391 cri.go:76] found id: "dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79"
	I0816 22:44:06.281094   20391 cri.go:76] found id: "def14af399b574c5412b6c393d65b7d0b6ae9f71d7709f25fe7a7956acf00e0c"
	I0816 22:44:06.281102   20391 cri.go:76] found id: "59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5"
	I0816 22:44:06.281108   20391 cri.go:76] found id: "57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8"
	I0816 22:44:06.281113   20391 cri.go:76] found id: "91c23817fb7abda2b7c16241219d51e4ed678c5cb58792aa6ddbebafd8c8e1d7"
	I0816 22:44:06.281119   20391 cri.go:76] found id: "b98cd396fdb126a029db11a0dc2fba55c059c5e1325b67b4a634d563af29288b"
	I0816 22:44:06.281124   20391 cri.go:76] found id: "ee6e3ad90fb4e19819658b68382724e6a5cf8fe284be81a4b38efc7636b1dde1"
	I0816 22:44:06.281129   20391 cri.go:76] found id: "cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363"
	I0816 22:44:06.281135   20391 cri.go:76] found id: "091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f"
	I0816 22:44:06.281155   20391 cri.go:76] found id: ""
	I0816 22:44:06.281213   20391 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0816 22:44:06.328104   20391 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f","pid":6652,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f/rootfs","created":"2021-08-16T22:43:52.766767443Z","annotations":{"io.kubernetes.cri.container-name":"kubernetes-dashboard","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"35bd4253798955a15f9479aac90343746cf3663478ae758392ad47246b5e967a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"0bbbf0f687140f3a58e4d946ef636533bb339483f7f898eb3a8752eed74d652c","pid":5392,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/0bbbf0f687140f3a58e4d946ef636533bb339483f7f898eb3a8752eed74d652c","rootfs":"/run/containerd/io.container
d.runtime.v2.task/k8s.io/0bbbf0f687140f3a58e4d946ef636533bb339483f7f898eb3a8752eed74d652c/rootfs","created":"2021-08-16T22:43:22.287515984Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"0bbbf0f687140f3a58e4d946ef636533bb339483f7f898eb3a8752eed74d652c","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-default-k8s-different-port-20210816223418-6986_bf7898b008f5522b40af5e4944af35da"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"1edc418e603d15886d25351e515b8f0101a592ddb903d3d6501fb8c5f472d2bf","pid":5383,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1edc418e603d15886d25351e515b8f0101a592ddb903d3d6501fb8c5f472d2bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/1edc418e603d15886d25351e515b8f0101a592ddb903d3d6501fb8c5f472d2bf/rootfs","created":"2021-08-16T22:43:22.257807817Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"1edc41
8e603d15886d25351e515b8f0101a592ddb903d3d6501fb8c5f472d2bf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-default-k8s-different-port-20210816223418-6986_f5b0868343e4393a2db84d0a125fb9b8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"232baf5e6e81875c27def143a91aee80a3e1ef739a9f3162089780ea14ea7a3a","pid":5377,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/232baf5e6e81875c27def143a91aee80a3e1ef739a9f3162089780ea14ea7a3a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/232baf5e6e81875c27def143a91aee80a3e1ef739a9f3162089780ea14ea7a3a/rootfs","created":"2021-08-16T22:43:22.268582228Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"232baf5e6e81875c27def143a91aee80a3e1ef739a9f3162089780ea14ea7a3a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-default-k8s-different-port-20210816223418-6986_2cec70d99baaffdb32771ad61e5d108f"},"owner":"root"},{"ociVersion":"1
.0.2-dev","id":"35bd4253798955a15f9479aac90343746cf3663478ae758392ad47246b5e967a","pid":6523,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35bd4253798955a15f9479aac90343746cf3663478ae758392ad47246b5e967a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/35bd4253798955a15f9479aac90343746cf3663478ae758392ad47246b5e967a/rootfs","created":"2021-08-16T22:43:51.910791273Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"35bd4253798955a15f9479aac90343746cf3663478ae758392ad47246b5e967a","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_kubernetes-dashboard-6fcdf4f6d-7gstk_806d3966-d956-400b-b825-eb1393026138"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3a8ee57b53e2eea1b5e785b06a20b761d3cb64d70e8a2cea5617170c35c023ed","pid":5369,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3a8ee57b53e2eea1b5e785b06a20b761d3cb64d70e8a2cea5617170c35c023ed","rootfs":"/run/cont
ainerd/io.containerd.runtime.v2.task/k8s.io/3a8ee57b53e2eea1b5e785b06a20b761d3cb64d70e8a2cea5617170c35c023ed/rootfs","created":"2021-08-16T22:43:22.261992822Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3a8ee57b53e2eea1b5e785b06a20b761d3cb64d70e8a2cea5617170c35c023ed","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-different-port-20210816223418-6986_9666ed4138865243da78558f0114d546"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"52b506e6ea8d483556e6d99b63d5182ab7f64ecf9decd63acb67b80ad56ff9c9","pid":6342,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/52b506e6ea8d483556e6d99b63d5182ab7f64ecf9decd63acb67b80ad56ff9c9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/52b506e6ea8d483556e6d99b63d5182ab7f64ecf9decd63acb67b80ad56ff9c9/rootfs","created":"2021-08-16T22:43:51.130909911Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-i
d":"52b506e6ea8d483556e6d99b63d5182ab7f64ecf9decd63acb67b80ad56ff9c9","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_metrics-server-7c784ccb57-pbxnr_fa2d27a5-b243-4a8f-9450-b834d1ce5bb0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8","pid":5562,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8/rootfs","created":"2021-08-16T22:43:23.672123264Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"1edc418e603d15886d25351e515b8f0101a592ddb903d3d6501fb8c5f472d2bf"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5","pid":5884,"status":"running","bundle":"/run/c
ontainerd/io.containerd.runtime.v2.task/k8s.io/59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5/rootfs","created":"2021-08-16T22:43:46.387583926Z","annotations":{"io.kubernetes.cri.container-name":"kube-proxy","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ba0dd77bce835f39513823292ed6e73aed2799df561589783949cdd279ca198f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"91c23817fb7abda2b7c16241219d51e4ed678c5cb58792aa6ddbebafd8c8e1d7","pid":5526,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/91c23817fb7abda2b7c16241219d51e4ed678c5cb58792aa6ddbebafd8c8e1d7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/91c23817fb7abda2b7c16241219d51e4ed678c5cb58792aa6ddbebafd8c8e1d7/rootfs","created":"2021-08-16T22:43:23.510844339Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.
kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"232baf5e6e81875c27def143a91aee80a3e1ef739a9f3162089780ea14ea7a3a"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"94f02cd4190110c6a14cb72203c80ac478f0f42f6d9c58cdd09fafa3f66255e6","pid":6529,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/94f02cd4190110c6a14cb72203c80ac478f0f42f6d9c58cdd09fafa3f66255e6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/94f02cd4190110c6a14cb72203c80ac478f0f42f6d9c58cdd09fafa3f66255e6/rootfs","created":"2021-08-16T22:43:51.942669791Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"94f02cd4190110c6a14cb72203c80ac478f0f42f6d9c58cdd09fafa3f66255e6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kubernetes-dashboard_dashboard-metrics-scraper-8685c45546-nctsf_3023a30f-e167-4d2d-9cb7-5f01b3a89700"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"b98cd396fdb126a029db11a0dc2fba55c059c5e1325b67b4a634d563af29288b
","pid":5524,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b98cd396fdb126a029db11a0dc2fba55c059c5e1325b67b4a634d563af29288b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b98cd396fdb126a029db11a0dc2fba55c059c5e1325b67b4a634d563af29288b/rootfs","created":"2021-08-16T22:43:23.422552219Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"0bbbf0f687140f3a58e4d946ef636533bb339483f7f898eb3a8752eed74d652c"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ba0dd77bce835f39513823292ed6e73aed2799df561589783949cdd279ca198f","pid":5850,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba0dd77bce835f39513823292ed6e73aed2799df561589783949cdd279ca198f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba0dd77bce835f39513823292ed6e73aed2799df561589783949cdd279ca198f/rootfs","created":"2021-08-16T22:43:46.091778395Z","annotatio
ns":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ba0dd77bce835f39513823292ed6e73aed2799df561589783949cdd279ca198f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-proxy-qhsq8_9abb9351-b721-48bb-94b9-887b5afc7584"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"cd0f8b9910c05b25b831041c22c7df04d08a9baabb20b7d5b37d960440fe4cbf","pid":6338,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd0f8b9910c05b25b831041c22c7df04d08a9baabb20b7d5b37d960440fe4cbf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd0f8b9910c05b25b831041c22c7df04d08a9baabb20b7d5b37d960440fe4cbf/rootfs","created":"2021-08-16T22:43:51.498813115Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"cd0f8b9910c05b25b831041c22c7df04d08a9baabb20b7d5b37d960440fe4cbf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_storage-provisioner_a88a523b-5707-46b9-b7cf-6931db0d4487"},"owner":"root
"},{"ociVersion":"1.0.2-dev","id":"def14af399b574c5412b6c393d65b7d0b6ae9f71d7709f25fe7a7956acf00e0c","pid":6181,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/def14af399b574c5412b6c393d65b7d0b6ae9f71d7709f25fe7a7956acf00e0c","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/def14af399b574c5412b6c393d65b7d0b6ae9f71d7709f25fe7a7956acf00e0c/rootfs","created":"2021-08-16T22:43:48.773575867Z","annotations":{"io.kubernetes.cri.container-name":"coredns","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"e49dbc663b365fe821c92558277f0c3ae21637c6a27e32074f2b85f6fdd0fde8"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79","pid":6676,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dfd15db9428f264623cd687de765401bc3b7f7293c3
ecad264cc26e1ff22cd79/rootfs","created":"2021-08-16T22:43:52.897941235Z","annotations":{"io.kubernetes.cri.container-name":"storage-provisioner","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"cd0f8b9910c05b25b831041c22c7df04d08a9baabb20b7d5b37d960440fe4cbf"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e49dbc663b365fe821c92558277f0c3ae21637c6a27e32074f2b85f6fdd0fde8","pid":6126,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e49dbc663b365fe821c92558277f0c3ae21637c6a27e32074f2b85f6fdd0fde8","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e49dbc663b365fe821c92558277f0c3ae21637c6a27e32074f2b85f6fdd0fde8/rootfs","created":"2021-08-16T22:43:47.805663518Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"e49dbc663b365fe821c92558277f0c3ae21637c6a27e32074f2b85f6fdd0fde8","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_coredns-558bd4d5db-jvhn9_3c48c2dc-4beb-4359-aadc-1365d
b48feac"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ee6e3ad90fb4e19819658b68382724e6a5cf8fe284be81a4b38efc7636b1dde1","pid":5449,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee6e3ad90fb4e19819658b68382724e6a5cf8fe284be81a4b38efc7636b1dde1","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee6e3ad90fb4e19819658b68382724e6a5cf8fe284be81a4b38efc7636b1dde1/rootfs","created":"2021-08-16T22:43:22.926222712Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3a8ee57b53e2eea1b5e785b06a20b761d3cb64d70e8a2cea5617170c35c023ed"},"owner":"root"}]
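cri.go:103 dumps the raw "runc list -f json" payload shown above; the fields consumed downstream are id, status, and the io.kubernetes.cri.* annotations that map a task back to its pod. A minimal decoder for that shape (the struct is an assumption inferred from this log, not runc's published Go type):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // runcContainer models the subset of `runc list -f json` output
    // that appears in the log above.
    type runcContainer struct {
        ID          string            `json:"id"`
        PID         int               `json:"pid"`
        Status      string            `json:"status"` // "running" or "paused"
        Bundle      string            `json:"bundle"`
        Annotations map[string]string `json:"annotations"`
    }

    func main() {
        // Abbreviated sample in the same shape as the logged payload.
        raw := `[{"id":"57f34c51","pid":5562,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/57f34c51","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container"}}]`
        var list []runcContainer
        if err := json.Unmarshal([]byte(raw), &list); err != nil {
            panic(err)
        }
        for _, c := range list {
            fmt.Printf("%s %s (%s)\n", c.ID, c.Status,
                c.Annotations["io.kubernetes.cri.container-name"])
        }
    }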
	I0816 22:44:06.328270   20391 cri.go:113] list returned 18 containers
	I0816 22:44:06.328280   20391 cri.go:116] container: {ID:091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f Status:paused}
	I0816 22:44:06.328290   20391 cri.go:122] skipping {091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f paused}: state = "paused", want "running"
	I0816 22:44:06.328298   20391 cri.go:116] container: {ID:0bbbf0f687140f3a58e4d946ef636533bb339483f7f898eb3a8752eed74d652c Status:running}
	I0816 22:44:06.328303   20391 cri.go:118] skipping 0bbbf0f687140f3a58e4d946ef636533bb339483f7f898eb3a8752eed74d652c - not in ps
	I0816 22:44:06.328307   20391 cri.go:116] container: {ID:1edc418e603d15886d25351e515b8f0101a592ddb903d3d6501fb8c5f472d2bf Status:running}
	I0816 22:44:06.328311   20391 cri.go:118] skipping 1edc418e603d15886d25351e515b8f0101a592ddb903d3d6501fb8c5f472d2bf - not in ps
	I0816 22:44:06.328315   20391 cri.go:116] container: {ID:232baf5e6e81875c27def143a91aee80a3e1ef739a9f3162089780ea14ea7a3a Status:running}
	I0816 22:44:06.328319   20391 cri.go:118] skipping 232baf5e6e81875c27def143a91aee80a3e1ef739a9f3162089780ea14ea7a3a - not in ps
	I0816 22:44:06.328322   20391 cri.go:116] container: {ID:35bd4253798955a15f9479aac90343746cf3663478ae758392ad47246b5e967a Status:running}
	I0816 22:44:06.328326   20391 cri.go:118] skipping 35bd4253798955a15f9479aac90343746cf3663478ae758392ad47246b5e967a - not in ps
	I0816 22:44:06.328330   20391 cri.go:116] container: {ID:3a8ee57b53e2eea1b5e785b06a20b761d3cb64d70e8a2cea5617170c35c023ed Status:running}
	I0816 22:44:06.328336   20391 cri.go:118] skipping 3a8ee57b53e2eea1b5e785b06a20b761d3cb64d70e8a2cea5617170c35c023ed - not in ps
	I0816 22:44:06.328339   20391 cri.go:116] container: {ID:52b506e6ea8d483556e6d99b63d5182ab7f64ecf9decd63acb67b80ad56ff9c9 Status:running}
	I0816 22:44:06.328343   20391 cri.go:118] skipping 52b506e6ea8d483556e6d99b63d5182ab7f64ecf9decd63acb67b80ad56ff9c9 - not in ps
	I0816 22:44:06.328346   20391 cri.go:116] container: {ID:57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8 Status:paused}
	I0816 22:44:06.328351   20391 cri.go:122] skipping {57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8 paused}: state = "paused", want "running"
	I0816 22:44:06.328356   20391 cri.go:116] container: {ID:59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5 Status:running}
	I0816 22:44:06.328359   20391 cri.go:116] container: {ID:91c23817fb7abda2b7c16241219d51e4ed678c5cb58792aa6ddbebafd8c8e1d7 Status:running}
	I0816 22:44:06.328364   20391 cri.go:116] container: {ID:94f02cd4190110c6a14cb72203c80ac478f0f42f6d9c58cdd09fafa3f66255e6 Status:running}
	I0816 22:44:06.328371   20391 cri.go:118] skipping 94f02cd4190110c6a14cb72203c80ac478f0f42f6d9c58cdd09fafa3f66255e6 - not in ps
	I0816 22:44:06.328374   20391 cri.go:116] container: {ID:b98cd396fdb126a029db11a0dc2fba55c059c5e1325b67b4a634d563af29288b Status:running}
	I0816 22:44:06.328381   20391 cri.go:116] container: {ID:ba0dd77bce835f39513823292ed6e73aed2799df561589783949cdd279ca198f Status:running}
	I0816 22:44:06.328385   20391 cri.go:118] skipping ba0dd77bce835f39513823292ed6e73aed2799df561589783949cdd279ca198f - not in ps
	I0816 22:44:06.328393   20391 cri.go:116] container: {ID:cd0f8b9910c05b25b831041c22c7df04d08a9baabb20b7d5b37d960440fe4cbf Status:running}
	I0816 22:44:06.328397   20391 cri.go:118] skipping cd0f8b9910c05b25b831041c22c7df04d08a9baabb20b7d5b37d960440fe4cbf - not in ps
	I0816 22:44:06.328403   20391 cri.go:116] container: {ID:def14af399b574c5412b6c393d65b7d0b6ae9f71d7709f25fe7a7956acf00e0c Status:running}
	I0816 22:44:06.328407   20391 cri.go:116] container: {ID:dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79 Status:running}
	I0816 22:44:06.328413   20391 cri.go:116] container: {ID:e49dbc663b365fe821c92558277f0c3ae21637c6a27e32074f2b85f6fdd0fde8 Status:running}
	I0816 22:44:06.328417   20391 cri.go:118] skipping e49dbc663b365fe821c92558277f0c3ae21637c6a27e32074f2b85f6fdd0fde8 - not in ps
	I0816 22:44:06.328423   20391 cri.go:116] container: {ID:ee6e3ad90fb4e19819658b68382724e6a5cf8fe284be81a4b38efc7636b1dde1 Status:running}
	I0816 22:44:06.328475   20391 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5
	I0816 22:44:06.350537   20391 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5 91c23817fb7abda2b7c16241219d51e4ed678c5cb58792aa6ddbebafd8c8e1d7
	I0816 22:44:06.385201   20391 out.go:177] 
	W0816 22:44:06.385354   20391 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5 91c23817fb7abda2b7c16241219d51e4ed678c5cb58792aa6ddbebafd8c8e1d7: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:44:06Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0816 22:44:06.385378   20391 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0816 22:44:06.387854   20391 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0816 22:44:06.389425   20391 out.go:177] 

** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p default-k8s-different-port-20210816223418-6986 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210816223418-6986 -n default-k8s-different-port-20210816223418-6986
E0816 22:44:08.368986    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816222224-6986/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210816223418-6986 -n default-k8s-different-port-20210816223418-6986: exit status 2 (14.440510454s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	E0816 22:44:20.837044   20444 status.go:422] Error apiserver status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

** /stderr **
helpers_test.go:240: status error: exit status 2 (may be ok)
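The 500 above is the apiserver's aggregated health report: every poststarthook passes, but [-]etcd failed because the etcd container (57f34c51...) was left frozen by the aborted pause, so the apiserver cannot reach its backing store. The same verbose report can be fetched directly from the node; a diagnostic sketch, assuming anonymous access to /healthz is enabled (the Kubernetes default) and skipping TLS verification for the self-signed cert:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // The apiserver certificate is signed by minikube's own CA, so skip
        // verification for this one-off diagnostic probe only.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.50.186:8444/healthz?verbose")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status) // 500 while etcd is paused
        fmt.Print(string(body))  // per-check [+]/[-] lines as in the log
    }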
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20210816223418-6986 logs -n 25

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p default-k8s-different-port-20210816223418-6986 logs -n 25: exit status 110 (11.692416937s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| delete  | -p                                                | disable-driver-mounts-20210816223418-6986      | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:18 UTC | Mon, 16 Aug 2021 22:34:18 UTC |
	|         | disable-driver-mounts-20210816223418-6986         |                                                |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:31:54 UTC | Mon, 16 Aug 2021 22:34:34 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:44 UTC | Mon, 16 Aug 2021 22:34:45 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816223156-6986                 | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:31:56 UTC | Mon, 16 Aug 2021 22:35:04 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:33:33 UTC | Mon, 16 Aug 2021 22:35:08 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:16 UTC | Mon, 16 Aug 2021 22:35:17 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:19 UTC | Mon, 16 Aug 2021 22:35:20 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:18 UTC | Mon, 16 Aug 2021 22:35:42 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:51 UTC | Mon, 16 Aug 2021 22:35:52 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:45 UTC | Mon, 16 Aug 2021 22:36:18 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:18 UTC | Mon, 16 Aug 2021 22:36:18 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:17 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:20 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:52 UTC | Mon, 16 Aug 2021 22:37:25 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:37:25 UTC | Mon, 16 Aug 2021 22:37:25 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816223156-6986                 | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:42:31 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:42:42 UTC | Mon, 16 Aug 2021 22:42:42 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:18 UTC | Mon, 16 Aug 2021 22:43:37 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:43:44 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:43:47 UTC | Mon, 16 Aug 2021 22:43:47 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:37:25 UTC | Mon, 16 Aug 2021 22:43:52 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:43:55 UTC | Mon, 16 Aug 2021 22:43:55 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:44:03 UTC | Mon, 16 Aug 2021 22:44:03 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:37:25
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:37:25.306577   19204 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:37:25.306653   19204 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:37:25.306656   19204 out.go:311] Setting ErrFile to fd 2...
	I0816 22:37:25.306663   19204 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:37:25.307072   19204 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:37:25.307547   19204 out.go:305] Setting JSON to false
	I0816 22:37:25.351342   19204 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":4807,"bootTime":1629148638,"procs":188,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:37:25.351461   19204 start.go:121] virtualization: kvm guest
	I0816 22:37:25.353955   19204 out.go:177] * [default-k8s-different-port-20210816223418-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:37:25.355393   19204 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:37:25.354127   19204 notify.go:169] Checking for updates...
	I0816 22:37:25.356781   19204 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:37:25.358158   19204 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:37:25.364678   19204 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:37:25.365267   19204 config.go:177] Loaded profile config "default-k8s-different-port-20210816223418-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:37:25.365899   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:37:25.365956   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:37:25.381650   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46493
	I0816 22:37:25.382065   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:37:25.382798   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:37:25.382820   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:37:25.383330   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:37:25.383519   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:25.383721   19204 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:37:25.384192   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:37:25.384260   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:37:25.401082   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44899
	I0816 22:37:25.402507   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:37:25.403115   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:37:25.403179   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:37:25.403663   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:37:25.403903   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:25.439751   19204 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 22:37:25.439781   19204 start.go:278] selected driver: kvm2
	I0816 22:37:25.439788   19204 start.go:751] validating driver "kvm2" against &{Name:default-k8s-different-port-20210816223418-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210816223418-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:37:25.439905   19204 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 22:37:25.441282   19204 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:37:25.441453   19204 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 22:37:25.455762   19204 install.go:137] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2 version is 1.22.0
	I0816 22:37:25.456183   19204 start_flags.go:697] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0816 22:37:25.456219   19204 cni.go:93] Creating CNI manager for ""
	I0816 22:37:25.456234   19204 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:37:25.456245   19204 start_flags.go:277] config:
	{Name:default-k8s-different-port-20210816223418-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210816223418-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:37:25.456384   19204 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:37:25.458420   19204 out.go:177] * Starting control plane node default-k8s-different-port-20210816223418-6986 in cluster default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.458447   19204 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:37:25.458480   19204 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0816 22:37:25.458495   19204 cache.go:56] Caching tarball of preloaded images
	I0816 22:37:25.458602   19204 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 22:37:25.458622   19204 cache.go:59] Finished verifying existence of preloaded tar for  v1.21.3 on containerd
	I0816 22:37:25.458779   19204 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/config.json ...
	I0816 22:37:25.459003   19204 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:37:25.459033   19204 start.go:313] acquiring machines lock for default-k8s-different-port-20210816223418-6986: {Name:mk808edd60d1305a42bb85791729eff4573dbb15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 22:37:25.459101   19204 start.go:317] acquired machines lock for "default-k8s-different-port-20210816223418-6986" in 48.071µs
	I0816 22:37:25.459123   19204 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:37:25.459131   19204 fix.go:55] fixHost starting: 
	I0816 22:37:25.459569   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:37:25.459614   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:37:25.473634   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44349
	I0816 22:37:25.474153   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:37:25.474765   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:37:25.474786   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:37:25.475205   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:37:25.475409   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:25.475621   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetState
	I0816 22:37:25.479447   19204 fix.go:108] recreateIfNeeded on default-k8s-different-port-20210816223418-6986: state=Stopped err=<nil>
	I0816 22:37:25.479498   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	W0816 22:37:25.479660   19204 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:37:21.322104   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:21.822129   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:22.321669   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:22.821492   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:23.322452   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:23.822419   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:24.322141   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:24.821615   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:24.856062   18923 api_server.go:70] duration metric: took 8.045517198s to wait for apiserver process to appear ...
	I0816 22:37:24.856091   18923 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:37:24.856103   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:24.856734   18923 api_server.go:255] stopped: https://192.168.116.66:8443/healthz: Get "https://192.168.116.66:8443/healthz": dial tcp 192.168.116.66:8443: connect: connection refused
	I0816 22:37:25.357442   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:22.382628   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Getting to WaitForSSH function...
	I0816 22:37:22.388062   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:22.388472   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:22.388501   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:22.388736   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Using SSH client type: external
	I0816 22:37:22.388774   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa (-rw-------)
	I0816 22:37:22.388825   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.105.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:37:22.388851   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | About to run SSH command:
	I0816 22:37:22.388868   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | exit 0
	I0816 22:37:23.527862   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | SSH cmd err, output: <nil>: 
	I0816 22:37:23.528297   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetConfigRaw
	I0816 22:37:23.529175   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetIP
	I0816 22:37:23.535445   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.535831   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.535862   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.536325   18929 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/config.json ...
	I0816 22:37:23.536603   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:23.536838   18929 machine.go:88] provisioning docker machine ...
	I0816 22:37:23.536860   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:23.537120   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetMachineName
	I0816 22:37:23.537298   18929 buildroot.go:166] provisioning hostname "embed-certs-20210816223333-6986"
	I0816 22:37:23.537328   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetMachineName
	I0816 22:37:23.537497   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:23.543084   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.543520   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.543560   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.543770   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:23.543953   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:23.544122   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:23.544284   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:23.544470   18929 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:23.544676   18929 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.105.129 22 <nil> <nil>}
	I0816 22:37:23.544698   18929 main.go:130] libmachine: About to run SSH command:
	sudo hostname embed-certs-20210816223333-6986 && echo "embed-certs-20210816223333-6986" | sudo tee /etc/hostname
	I0816 22:37:23.682935   18929 main.go:130] libmachine: SSH cmd err, output: <nil>: embed-certs-20210816223333-6986
	
	I0816 22:37:23.682982   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:23.689555   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.690034   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.690071   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.690297   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:23.690526   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:23.690738   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:23.690910   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:23.691116   18929 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:23.691321   18929 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.105.129 22 <nil> <nil>}
	I0816 22:37:23.691351   18929 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-20210816223333-6986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-20210816223333-6986/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-20210816223333-6986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:37:23.826330   18929 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:37:23.826357   18929 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:37:23.826393   18929 buildroot.go:174] setting up certificates
	I0816 22:37:23.826403   18929 provision.go:83] configureAuth start
	I0816 22:37:23.826415   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetMachineName
	I0816 22:37:23.826673   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetIP
	I0816 22:37:23.832833   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.833221   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.833252   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.833505   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:23.839058   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.839437   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:23.839468   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:23.839721   18929 provision.go:138] copyHostCerts
	I0816 22:37:23.839785   18929 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:37:23.839801   18929 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:37:23.839858   18929 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:37:23.840010   18929 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:37:23.840023   18929 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:37:23.840050   18929 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:37:23.840148   18929 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:37:23.840160   18929 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:37:23.840181   18929 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
	I0816 22:37:23.840251   18929 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.embed-certs-20210816223333-6986 san=[192.168.105.129 192.168.105.129 localhost 127.0.0.1 minikube embed-certs-20210816223333-6986]
	I0816 22:37:24.071276   18929 provision.go:172] copyRemoteCerts
	I0816 22:37:24.071347   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:37:24.071383   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.077584   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.078065   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.078133   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.078307   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.078500   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.078636   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.078743   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:37:24.168996   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:37:24.190581   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0816 22:37:24.211894   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:37:24.234970   18929 provision.go:86] duration metric: configureAuth took 408.533613ms
	I0816 22:37:24.235001   18929 buildroot.go:189] setting minikube options for container-runtime
	I0816 22:37:24.235282   18929 config.go:177] Loaded profile config "embed-certs-20210816223333-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:37:24.235303   18929 machine.go:91] provisioned docker machine in 698.450664ms
	I0816 22:37:24.235313   18929 start.go:267] post-start starting for "embed-certs-20210816223333-6986" (driver="kvm2")
	I0816 22:37:24.235321   18929 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:37:24.235352   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.235711   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:37:24.235748   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.242219   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.242647   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.242677   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.242968   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.243197   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.243376   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.243542   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:37:24.342244   18929 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:37:24.348430   18929 info.go:137] Remote host: Buildroot 2020.02.12
	I0816 22:37:24.348458   18929 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:37:24.348527   18929 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:37:24.348678   18929 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0816 22:37:24.348794   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:37:24.358370   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:37:24.378832   18929 start.go:270] post-start completed in 143.493882ms
	I0816 22:37:24.378891   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.379183   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.385172   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.385565   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.385596   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.385720   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.385936   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.386069   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.386238   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.386404   18929 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:24.386604   18929 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.105.129 22 <nil> <nil>}
	I0816 22:37:24.386621   18929 main.go:130] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0816 22:37:24.513150   18929 main.go:130] libmachine: SSH cmd err, output: <nil>: 1629153444.435910196
	
	I0816 22:37:24.513175   18929 fix.go:212] guest clock: 1629153444.435910196
	I0816 22:37:24.513185   18929 fix.go:225] Guest: 2021-08-16 22:37:24.435910196 +0000 UTC Remote: 2021-08-16 22:37:24.379164096 +0000 UTC m=+28.470229855 (delta=56.7461ms)
	I0816 22:37:24.513209   18929 fix.go:196] guest clock delta is within tolerance: 56.7461ms
	I0816 22:37:24.513220   18929 fix.go:57] fixHost completed within 14.813246061s
	I0816 22:37:24.513226   18929 start.go:80] releasing machines lock for "embed-certs-20210816223333-6986", held for 14.813280431s
	I0816 22:37:24.513267   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.513532   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetIP
	I0816 22:37:24.519703   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.520118   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.520149   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.520319   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.520528   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.521062   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:37:24.521300   18929 ssh_runner.go:149] Run: systemctl --version
	I0816 22:37:24.521326   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.521364   18929 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:37:24.521406   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:37:24.527844   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.527923   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.528257   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.528281   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.528308   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:24.528323   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:24.528556   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.528678   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:37:24.528724   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.528933   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:37:24.528943   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.529108   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:37:24.529179   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:37:24.529267   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:37:24.634682   18929 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:37:24.634891   18929 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:37:24.131199   18635 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:37:24.131267   18635 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:37:24.140028   18635 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:37:24.157600   18635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:37:24.171359   18635 system_pods.go:59] 8 kube-system pods found
	I0816 22:37:24.171398   18635 system_pods.go:61] "coredns-fb8b8dccf-qwcrg" [fd98f945-fee1-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171407   18635 system_pods.go:61] "etcd-old-k8s-version-20210816223154-6986" [1d77612e-fee2-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171414   18635 system_pods.go:61] "kube-apiserver-old-k8s-version-20210816223154-6986" [152107a2-fee2-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171420   18635 system_pods.go:61] "kube-controller-manager-old-k8s-version-20210816223154-6986" [8620a0da-fee2-11eb-b5b6-525400bf2371] Pending
	I0816 22:37:24.171426   18635 system_pods.go:61] "kube-proxy-nvb2s" [fdaa2b42-fee1-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171438   18635 system_pods.go:61] "kube-scheduler-old-k8s-version-20210816223154-6986" [1b1505e6-fee2-11eb-bb5b-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:37:24.171454   18635 system_pods.go:61] "metrics-server-8546d8b77b-gl6jr" [28801d4e-fee2-11eb-bb5b-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:37:24.171462   18635 system_pods.go:61] "storage-provisioner" [ff1e11f1-fee1-11eb-bb5b-525400bf2371] Running
	I0816 22:37:24.171469   18635 system_pods.go:74] duration metric: took 13.840978ms to wait for pod list to return data ...
	I0816 22:37:24.171481   18635 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:37:24.176303   18635 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:37:24.176347   18635 node_conditions.go:123] node cpu capacity is 2
	I0816 22:37:24.176360   18635 node_conditions.go:105] duration metric: took 4.872863ms to run NodePressure ...
	I0816 22:37:24.176376   18635 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:25.292041   18635 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (1.115642082s)
	I0816 22:37:25.292077   18635 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:37:25.325547   18635 kubeadm.go:746] kubelet initialised
	I0816 22:37:25.325574   18635 kubeadm.go:747] duration metric: took 33.485813ms waiting for restarted kubelet to initialise ...
	I0816 22:37:25.325590   18635 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:37:25.351142   18635 pod_ready.go:78] waiting up to 4m0s for pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:27.387702   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:25.482074   19204 out.go:177] * Restarting existing kvm2 VM for "default-k8s-different-port-20210816223418-6986" ...
	I0816 22:37:25.482104   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Start
	I0816 22:37:25.482316   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Ensuring networks are active...
	I0816 22:37:25.484598   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Ensuring network default is active
	I0816 22:37:25.485014   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Ensuring network mk-default-k8s-different-port-20210816223418-6986 is active
	I0816 22:37:25.485452   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Getting domain xml...
	I0816 22:37:25.487765   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Creating domain...
	I0816 22:37:25.923048   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Waiting to get IP...
	I0816 22:37:25.924065   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.924660   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Found IP for machine: 192.168.50.186
	I0816 22:37:25.924682   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Reserving static IP address...
	I0816 22:37:25.924701   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has current primary IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.925155   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "default-k8s-different-port-20210816223418-6986", mac: "52:54:00:ed:d4:05", ip: "192.168.50.186"} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:34:34 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:25.925187   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | skip adding static IP to network mk-default-k8s-different-port-20210816223418-6986 - found existing host DHCP lease matching {name: "default-k8s-different-port-20210816223418-6986", mac: "52:54:00:ed:d4:05", ip: "192.168.50.186"}
	I0816 22:37:25.925202   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Reserved static IP address: 192.168.50.186
	I0816 22:37:25.925219   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Waiting for SSH to be available...
	I0816 22:37:25.925234   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Getting to WaitForSSH function...
	I0816 22:37:25.930369   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.930660   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:34:34 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:25.930705   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:25.930802   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Using SSH client type: external
	I0816 22:37:25.930842   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa (-rw-------)
	I0816 22:37:25.930888   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:37:25.931010   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | About to run SSH command:
	I0816 22:37:25.931033   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | exit 0
	I0816 22:37:30.356304   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:37:30.356337   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:37:30.357361   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:30.544479   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:30.544514   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:30.857809   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:30.866881   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:30.866920   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:28.652395   18929 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.017437883s)
	I0816 22:37:28.652577   18929 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0816 22:37:28.652647   18929 ssh_runner.go:149] Run: which lz4
	I0816 22:37:28.657345   18929 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0816 22:37:28.662555   18929 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0816 22:37:28.662584   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (928970367 bytes)
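(Editorial note: the stat-then-scp pair above is a plain existence check before shipping the ~900 MB preload tarball. Schematically, with a placeholder <guest> host and ordinary ssh/scp standing in for minikube's in-process ssh_runner:)

    # copy the preload tarball only if the guest doesn't already have one
    if ! ssh docker@<guest> stat /preloaded.tar.lz4 >/dev/null 2>&1; then
      scp preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 \
          docker@<guest>:/preloaded.tar.lz4
    fi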
	I0816 22:37:31.357641   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:31.385946   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:31.385974   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:31.857651   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:31.878038   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:31.878070   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:32.357730   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:32.371926   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:37:32.371954   18923 api_server.go:101] status: https://192.168.116.66:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:37:32.857204   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:37:32.867865   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 200:
	ok
	I0816 22:37:32.881085   18923 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:37:32.881113   18923 api_server.go:129] duration metric: took 8.025015474s to wait for apiserver health ...
	I0816 22:37:32.881124   18923 cni.go:93] Creating CNI manager for ""
	I0816 22:37:32.881132   18923 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
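(Editorial note: the 403 -> 500 -> 200 progression in this block is the apiserver readiness probe doing its job. As a standalone check against the address from the log, it boils down to something like the loop below; this is a sketch, not minikube's actual code. A 403 means anonymous RBAC is not bootstrapped yet, a 500 means post-start hooks are still failing, and both count as "retry later".)

    # keep hitting /healthz until it returns the bare string "ok"
    until [ "$(curl -ksm 2 https://192.168.116.66:8443/healthz)" = "ok" ]; do
      sleep 0.5
    done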
	I0816 22:37:29.389763   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:31.391442   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:35.155848   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | SSH cmd err, output: exit status 255: 
	I0816 22:37:35.155882   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0816 22:37:35.155896   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | command : exit 0
	I0816 22:37:35.155905   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | err     : exit status 255
	I0816 22:37:35.155918   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | output  : 
	I0816 22:37:32.883184   18923 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:37:32.883268   18923 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:37:32.927942   18923 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:37:33.011939   18923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:37:33.043009   18923 system_pods.go:59] 8 kube-system pods found
	I0816 22:37:33.043056   18923 system_pods.go:61] "coredns-78fcd69978-nzf79" [a95afe1c-4f93-44a8-b669-b42c72f3500d] Running
	I0816 22:37:33.043064   18923 system_pods.go:61] "etcd-no-preload-20210816223156-6986" [fc40f0e0-16ef-4ba8-b5fd-17f4684d3a13] Running
	I0816 22:37:33.043076   18923 system_pods.go:61] "kube-apiserver-no-preload-20210816223156-6986" [f13df2c8-5aa8-49c3-89c0-b584ff8c62c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 22:37:33.043083   18923 system_pods.go:61] "kube-controller-manager-no-preload-20210816223156-6986" [8b866a1c-d283-4410-acbf-be2dbaa0f025] Running
	I0816 22:37:33.043094   18923 system_pods.go:61] "kube-proxy-64m6s" [fc5086fe-a671-4078-b76c-0c8f0656dca6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:37:33.043108   18923 system_pods.go:61] "kube-scheduler-no-preload-20210816223156-6986" [5db4c302-251a-47dc-90b9-424206ed445d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:37:33.043123   18923 system_pods.go:61] "metrics-server-7c784ccb57-44llk" [319102e5-661e-43bc-9c07-07463f6b1e41] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:37:33.043129   18923 system_pods.go:61] "storage-provisioner" [3da85640-a722-4ba1-a886-926bcaf81b8e] Running
	I0816 22:37:33.043140   18923 system_pods.go:74] duration metric: took 31.176037ms to wait for pod list to return data ...
	I0816 22:37:33.043149   18923 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:37:33.049500   18923 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:37:33.049531   18923 node_conditions.go:123] node cpu capacity is 2
	I0816 22:37:33.049544   18923 node_conditions.go:105] duration metric: took 6.385759ms to run NodePressure ...
	I0816 22:37:33.049562   18923 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:33.993434   18923 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:37:34.012191   18923 kubeadm.go:746] kubelet initialised
	I0816 22:37:34.012215   18923 kubeadm.go:747] duration metric: took 18.75429ms waiting for restarted kubelet to initialise ...
	I0816 22:37:34.012224   18923 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:37:34.033224   18923 pod_ready.go:78] waiting up to 4m0s for pod "coredns-78fcd69978-nzf79" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:34.059145   18923 pod_ready.go:92] pod "coredns-78fcd69978-nzf79" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:34.059169   18923 pod_ready.go:81] duration metric: took 25.912051ms waiting for pod "coredns-78fcd69978-nzf79" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:34.059183   18923 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
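(Editorial note: each pod_ready wait above polls the pod's Ready condition. A hypothetical command-line equivalent, using kubectl rather than minikube's in-process poller, and mirroring the 4m0s budget in the log:)

    kubectl -n kube-system wait --for=condition=Ready \
      pod/etcd-no-preload-20210816223156-6986 --timeout=4m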
	I0816 22:37:32.660993   18929 containerd.go:546] Took 4.003687 seconds to copy over tarball
	I0816 22:37:32.661054   18929 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 22:37:33.892216   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:36.388385   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:38.156062   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Getting to WaitForSSH function...
	I0816 22:37:38.161988   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:38.162321   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:34:34 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:38.162379   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:38.162468   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Using SSH client type: external
	I0816 22:37:38.162499   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa (-rw-------)
	I0816 22:37:38.162538   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.186 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:37:38.162552   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | About to run SSH command:
	I0816 22:37:38.162570   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | exit 0
	I0816 22:37:36.102180   18923 pod_ready.go:102] pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:38.889153   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:41.402823   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | SSH cmd err, output: <nil>: 
	I0816 22:37:41.403283   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetConfigRaw
	I0816 22:37:41.404010   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetIP
	I0816 22:37:41.410017   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.410394   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.410432   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.410693   19204 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/config.json ...
	I0816 22:37:41.410926   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:41.411142   19204 machine.go:88] provisioning docker machine ...
	I0816 22:37:41.411167   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:41.411335   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetMachineName
	I0816 22:37:41.411477   19204 buildroot.go:166] provisioning hostname "default-k8s-different-port-20210816223418-6986"
	I0816 22:37:41.411499   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetMachineName
	I0816 22:37:41.411620   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:41.416760   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.417121   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.417154   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.417291   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:41.417487   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:41.417620   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:41.417769   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:41.417933   19204 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:41.418151   19204 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 22:37:41.418167   19204 main.go:130] libmachine: About to run SSH command:
	sudo hostname default-k8s-different-port-20210816223418-6986 && echo "default-k8s-different-port-20210816223418-6986" | sudo tee /etc/hostname
	I0816 22:37:41.560416   19204 main.go:130] libmachine: SSH cmd err, output: <nil>: default-k8s-different-port-20210816223418-6986
	
	I0816 22:37:41.560449   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:41.566690   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.567028   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.567064   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.567351   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:41.567542   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:41.567703   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:41.567827   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:41.567996   19204 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:41.568193   19204 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 22:37:41.568221   19204 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-different-port-20210816223418-6986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-different-port-20210816223418-6986/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-different-port-20210816223418-6986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:37:41.743484   19204 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:37:41.743518   19204 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:37:41.743559   19204 buildroot.go:174] setting up certificates
	I0816 22:37:41.743576   19204 provision.go:83] configureAuth start
	I0816 22:37:41.743593   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetMachineName
	I0816 22:37:41.743895   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetIP
	I0816 22:37:41.750014   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.750423   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.750467   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.750809   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:41.756158   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.756536   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:41.756569   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:41.756717   19204 provision.go:138] copyHostCerts
	I0816 22:37:41.756789   19204 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:37:41.756799   19204 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:37:41.756862   19204 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:37:41.756962   19204 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:37:41.756972   19204 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:37:41.756994   19204 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:37:41.757071   19204 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:37:41.757082   19204 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:37:41.757102   19204 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
	I0816 22:37:41.757156   19204 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.default-k8s-different-port-20210816223418-6986 san=[192.168.50.186 192.168.50.186 localhost 127.0.0.1 minikube default-k8s-different-port-20210816223418-6986]
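(Editorial note: minikube generates the server certificate in Go, but a rough openssl equivalent of what the line above does, with the org and SANs taken from the log, looks like this; a sketch, not the actual implementation:)

    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.default-k8s-different-port-20210816223418-6986"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf "subjectAltName=IP:192.168.50.186,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:default-k8s-different-port-20210816223418-6986")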
	I0816 22:37:42.356131   19204 provision.go:172] copyRemoteCerts
	I0816 22:37:42.356205   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:37:42.356250   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.362214   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.362513   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.362547   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.362780   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.362992   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.363219   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.363363   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:37:42.482862   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:37:42.512838   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1306 bytes)
	I0816 22:37:42.540047   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0816 22:37:42.568047   19204 provision.go:86] duration metric: configureAuth took 824.454088ms
	I0816 22:37:42.568077   19204 buildroot.go:189] setting minikube options for container-runtime
	I0816 22:37:42.568300   19204 config.go:177] Loaded profile config "default-k8s-different-port-20210816223418-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:37:42.568315   19204 machine.go:91] provisioned docker machine in 1.157156536s
	I0816 22:37:42.568324   19204 start.go:267] post-start starting for "default-k8s-different-port-20210816223418-6986" (driver="kvm2")
	I0816 22:37:42.568333   19204 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:37:42.568368   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.568715   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:37:42.568749   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.574488   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.574891   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.574928   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.575140   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.575339   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.575523   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.575710   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:37:42.676578   19204 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:37:42.682148   19204 info.go:137] Remote host: Buildroot 2020.02.12
	I0816 22:37:42.682181   19204 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:37:42.682247   19204 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:37:42.682409   19204 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0816 22:37:42.682558   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:37:42.691519   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:37:42.711453   19204 start.go:270] post-start completed in 143.110809ms
	I0816 22:37:42.711496   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.711732   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.718125   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.718511   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.718547   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.718832   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.719063   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.719246   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.719404   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.719588   19204 main.go:130] libmachine: Using SSH client type: native
	I0816 22:37:42.719762   19204 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.50.186 22 <nil> <nil>}
	I0816 22:37:42.719775   19204 main.go:130] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0816 22:37:42.864591   19204 main.go:130] libmachine: SSH cmd err, output: <nil>: 1629153462.785763979
	
	I0816 22:37:42.864617   19204 fix.go:212] guest clock: 1629153462.785763979
	I0816 22:37:42.864627   19204 fix.go:225] Guest: 2021-08-16 22:37:42.785763979 +0000 UTC Remote: 2021-08-16 22:37:42.711713193 +0000 UTC m=+17.455762277 (delta=74.050786ms)
	I0816 22:37:42.864651   19204 fix.go:196] guest clock delta is within tolerance: 74.050786ms
	I0816 22:37:42.864660   19204 fix.go:57] fixHost completed within 17.405528602s
	I0816 22:37:42.864666   19204 start.go:80] releasing machines lock for "default-k8s-different-port-20210816223418-6986", held for 17.405551891s
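(Editorial note: the garbled "%!s(MISSING).%!N(MISSING)" a few lines up is Go's fmt error-verb output in the logger, not the command itself; the guest actually runs the command below, and the resulting timestamp is compared against the host clock to compute the ~74ms delta shown.)

    date +%s.%N        # e.g. 1629153462.785763979 on the guest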
	I0816 22:37:42.864711   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.864961   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetIP
	I0816 22:37:42.871077   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.871460   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.871504   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.871781   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.871990   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.872747   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:37:42.873035   19204 ssh_runner.go:149] Run: systemctl --version
	I0816 22:37:42.873067   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.873387   19204 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:37:42.873431   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:37:42.881178   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.881737   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.882041   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.882067   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.882095   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:37:42.882114   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:37:42.882432   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.882476   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:37:42.882624   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.882654   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:37:42.882754   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.882821   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:37:42.882852   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:37:42.882932   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:37:42.983824   19204 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:37:42.983945   19204 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:37:41.792417   18923 pod_ready.go:102] pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:42.110388   18923 pod_ready.go:92] pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:42.110425   18923 pod_ready.go:81] duration metric: took 8.051231395s waiting for pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.110443   18923 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.128769   18923 pod_ready.go:92] pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:42.128789   18923 pod_ready.go:81] duration metric: took 18.337432ms waiting for pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.128804   18923 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.137520   18923 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:42.137541   18923 pod_ready.go:81] duration metric: took 8.728281ms waiting for pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.137554   18923 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-64m6s" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.158798   18923 pod_ready.go:92] pod "kube-proxy-64m6s" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:42.158877   18923 pod_ready.go:81] duration metric: took 21.313805ms waiting for pod "kube-proxy-64m6s" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:42.158908   18923 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:44.512973   18923 pod_ready.go:102] pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:45.697026   18923 pod_ready.go:92] pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:37:45.697054   18923 pod_ready.go:81] duration metric: took 3.538123235s waiting for pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:45.697067   18923 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace to be "Ready" ...
	I0816 22:37:44.369712   18929 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (11.708626678s)
	I0816 22:37:44.369752   18929 containerd.go:553] Took 11.708733 seconds to extract the tarball
	I0816 22:37:44.369766   18929 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0816 22:37:44.433232   18929 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:37:44.586357   18929 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:37:44.635654   18929 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0816 22:37:44.682553   18929 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0816 22:37:44.697822   18929 docker.go:153] disabling docker service ...
	I0816 22:37:44.697882   18929 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:37:44.709238   18929 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:37:44.720469   18929 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:37:44.857666   18929 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:37:44.991672   18929 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:37:45.005773   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:37:45.020903   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %!s(MISSING) "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kI
gogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
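The long base64 argument above is the containerd config.toml, encoded so it survives shell quoting on its way into /etc/containerd/config.toml; decoding it shows, among other settings, SystemdCgroup = false, snapshotter = "overlayfs", and sandbox_image = "k8s.gcr.io/pause:3.4.1". To inspect it offline, the payload can be piped through a small decoder such as this sketch (reads base64 on stdin, prints the TOML):

	package main

	import (
		"encoding/base64"
		"fmt"
		"io"
		"os"
		"strings"
	)

	func main() {
		// Usage sketch: echo "$PAYLOAD" | go run decode.go
		raw, err := io.ReadAll(os.Stdin)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// TrimSpace drops the trailing newline a shell pipe usually adds.
		cfg, err := base64.StdEncoding.DecodeString(strings.TrimSpace(string(raw)))
		if err != nil {
			fmt.Fprintln(os.Stderr, "not valid base64:", err)
			os.Exit(1)
		}
		os.Stdout.Write(cfg) // the decoded config.toml
	}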
	I0816 22:37:45.035818   18929 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:37:45.045388   18929 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:37:45.045444   18929 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:37:45.065836   18929 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
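The sysctl probe above fails with status 255 because /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded; the log treats that as a soft failure, loads the module, and turns on IPv4 forwarding. The same probe-then-fallback flow, sketched in Go (command strings taken from the log, error handling simplified):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensureBridgeNetfilter: a failed probe means "module not loaded yet",
	// not a hard error, so fall back to modprobe.
	func ensureBridgeNetfilter() error {
		if exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run() != nil {
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %w", err)
			}
		}
		// Kubernetes networking also needs IPv4 forwarding enabled.
		return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
	}

	func main() {
		if err := ensureBridgeNetfilter(); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}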
	I0816 22:37:45.073649   18929 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:37:45.210250   18929 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:37:45.536389   18929 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0816 22:37:45.536468   18929 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:37:45.543940   18929 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
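After restarting containerd its socket takes a moment to reappear, so the failed stat above is retried rather than treated as fatal ("Will wait 60s for socket path"). A generic sketch of that deadline poll (the one-second interval is illustrative; the log shows a computed ~1.1s retry):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists or the deadline passes.
	func waitForSocket(path string, timeout, interval time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil // socket is back
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second, time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}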
	I0816 22:37:46.648822   18929 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:37:46.654589   18929 start.go:413] Will wait 60s for crictl version
	I0816 22:37:46.654654   18929 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:37:46.687975   18929 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0816 22:37:46.688041   18929 ssh_runner.go:149] Run: containerd --version
	I0816 22:37:46.717960   18929 ssh_runner.go:149] Run: containerd --version
	I0816 22:37:43.671220   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:45.887022   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:47.896514   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:46.994449   19204 ssh_runner.go:189] Completed: sudo crictl images --output json: (4.010481954s)
	I0816 22:37:46.994588   19204 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.21.3". assuming images are not preloaded.
	I0816 22:37:46.994677   19204 ssh_runner.go:149] Run: which lz4
	I0816 22:37:46.999431   19204 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 22:37:47.004309   19204 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0816 22:37:47.004338   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (928970367 bytes)
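The stat at 22:37:46.999 is a cheap existence check before the expensive step: only because /preloaded.tar.lz4 is missing (status 1 above) does minikube push the ~900 MB preload tarball over scp. The real check compares size and mtime via stat -c "%s %y"; this local sketch only tests existence before copying:

	package main

	import (
		"fmt"
		"io"
		"os"
	)

	// copyIfMissing skips the transfer when the target already exists.
	func copyIfMissing(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			return nil // already present, skip the expensive copy
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		if err := copyIfMissing("preloaded-images.tar.lz4", "/preloaded.tar.lz4"); err != nil {
			fmt.Println(err)
		}
	}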
	I0816 22:37:47.723452   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:49.727582   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:46.750218   18929 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0816 22:37:46.750266   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetIP
	I0816 22:37:46.755631   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:46.756018   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:37:46.756051   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:37:46.756195   18929 ssh_runner.go:149] Run: grep 192.168.105.1	host.minikube.internal$ /etc/hosts
	I0816 22:37:46.760434   18929 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.105.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
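The grep -v / echo / cp pipeline above makes the hosts entry idempotent: any line already ending in the tab-separated hostname is stripped, a fresh "ip<TAB>host" mapping is appended, and the result is staged in /tmp before a single sudo cp into /etc/hosts. The same logic in Go (paths and names as in the log; the temp-then-rename step mirrors the pipeline's staging):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// upsertHostsEntry drops any stale line for host, then appends "ip\thost".
	func upsertHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host)
		// Stage to a temp file first so readers never see a partial file.
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, path)
	}

	func main() {
		if err := upsertHostsEntry("/etc/hosts", "192.168.105.1", "host.minikube.internal"); err != nil {
			fmt.Println(err)
		}
	}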
	I0816 22:37:46.770865   18929 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:37:46.770913   18929 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:37:46.804122   18929 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:37:46.804147   18929 containerd.go:517] Images already preloaded, skipping extraction
	I0816 22:37:46.804200   18929 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:37:46.836132   18929 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:37:46.836154   18929 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:37:46.836213   18929 ssh_runner.go:149] Run: sudo crictl info
	I0816 22:37:46.870224   18929 cni.go:93] Creating CNI manager for ""
	I0816 22:37:46.870256   18929 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:37:46.870269   18929 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:37:46.870282   18929 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.105.129 APIServerPort:8443 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-20210816223333-6986 NodeName:embed-certs-20210816223333-6986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.105.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.105.129 CgroupDriver:cgroupf
s ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:37:46.870401   18929 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.105.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "embed-certs-20210816223333-6986"
	  kubeletExtraArgs:
	    node-ip: 192.168.105.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.105.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 22:37:46.870482   18929 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=embed-certs-20210816223333-6986 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.105.129 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:embed-certs-20210816223333-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0816 22:37:46.870540   18929 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0816 22:37:46.878703   18929 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:37:46.878775   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:37:46.887763   18929 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (548 bytes)
	I0816 22:37:46.900548   18929 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 22:37:46.911899   18929 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2090 bytes)
	I0816 22:37:46.925412   18929 ssh_runner.go:149] Run: grep 192.168.105.129	control-plane.minikube.internal$ /etc/hosts
	I0816 22:37:46.929442   18929 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.105.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:37:46.939989   18929 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986 for IP: 192.168.105.129
	I0816 22:37:46.940054   18929 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:37:46.940073   18929 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:37:46.940143   18929 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/client.key
	I0816 22:37:46.940182   18929 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/apiserver.key.ff3abd74
	I0816 22:37:46.940203   18929 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/proxy-client.key
	I0816 22:37:46.940311   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem (1338 bytes)
	W0816 22:37:46.940364   18929 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
	I0816 22:37:46.940374   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 22:37:46.940398   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:37:46.940419   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:37:46.940453   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1675 bytes)
	I0816 22:37:46.940501   18929 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:37:46.941607   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:37:46.959921   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 22:37:46.977073   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:37:46.995032   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/embed-certs-20210816223333-6986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 22:37:47.016388   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:37:47.036886   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 22:37:47.056736   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:37:47.076945   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 22:37:47.096512   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
	I0816 22:37:47.117888   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
	I0816 22:37:47.137952   18929 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:37:47.159313   18929 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:37:47.173334   18929 ssh_runner.go:149] Run: openssl version
	I0816 22:37:47.179650   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:37:47.191486   18929 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:37:47.196524   18929 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:42 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:37:47.196589   18929 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:37:47.204162   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:37:47.214626   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
	I0816 22:37:47.226391   18929 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6986.pem
	I0816 22:37:47.234494   18929 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:49 /usr/share/ca-certificates/6986.pem
	I0816 22:37:47.234558   18929 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
	I0816 22:37:47.242705   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
	I0816 22:37:47.253305   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
	I0816 22:37:47.263502   18929 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/69862.pem
	I0816 22:37:47.268803   18929 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:49 /usr/share/ca-certificates/69862.pem
	I0816 22:37:47.268865   18929 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
	I0816 22:37:47.274964   18929 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
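The openssl x509 -hash / ln -fs pairs above (b5213941.0, 51391683.0, 3ec20f2e.0) are how OpenSSL-style trust stores index CAs: each certificate placed in /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash. A sketch of one such pairing (shelling out to openssl exactly as the log does):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkBySubjectHash computes the cert's subject hash with openssl and
	// symlinks <hash>.0 to it; removing first gives "ln -fs" semantics.
	func linkBySubjectHash(certPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hash %s: %w", certPath, err)
		}
		link := certsDir + "/" + strings.TrimSpace(string(out)) + ".0"
		os.Remove(link) // replace any stale link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}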
	I0816 22:37:47.283354   18929 kubeadm.go:390] StartCluster: {Name:embed-certs-20210816223333-6986 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3
ClusterName:embed-certs-20210816223333-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.105.129 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] Start
HostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:37:47.283503   18929 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0816 22:37:47.283565   18929 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:37:47.325446   18929 cri.go:76] found id: ""
	I0816 22:37:47.325557   18929 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:37:47.335659   18929 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:37:47.335682   18929 kubeadm.go:600] restartCluster start
	I0816 22:37:47.335733   18929 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:37:47.346292   18929 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:47.347565   18929 kubeconfig.go:117] verify returned: extract IP: "embed-certs-20210816223333-6986" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:37:47.348014   18929 kubeconfig.go:128] "embed-certs-20210816223333-6986" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:37:47.348788   18929 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:37:47.351634   18929 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
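The freshly rendered config was written to kubeadm.yaml.new a few lines earlier; the diff above compares it against the kubeadm.yaml already on disk, and that comparison, together with the apiserver health checks that follow, decides between the "cluster does not require reconfiguration" fast path and the restart seen here. A sketch of the comparison step, assuming diff's exit status is what is consulted (0 means identical):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// needsReconfig: diff exits 0 when the files match, non-zero when they
	// differ or the old file is missing, so any error means "reconfigure".
	func needsReconfig(oldPath, newPath string) bool {
		return exec.Command("sudo", "diff", "-u", oldPath, newPath).Run() != nil
	}

	func main() {
		fmt.Println(needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new"))
	}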
	I0816 22:37:47.361663   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:47.361718   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:47.374579   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:47.574973   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:47.575059   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:47.589172   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:47.775434   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:47.775507   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:47.788957   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:47.975270   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:47.975360   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:47.989460   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.175680   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.175758   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.191429   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.375697   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.375790   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.386436   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.574665   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.574762   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.589082   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.775443   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.775512   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.791358   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:48.975634   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:48.975720   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:48.988259   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.175437   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.175544   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.190342   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.375596   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.375683   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.389601   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.574808   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.574892   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.585369   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.775000   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.775066   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.787982   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:49.975134   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:49.975231   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:49.986392   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:50.175658   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:50.175750   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:50.188143   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:50.375418   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:50.375514   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:50.387182   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:50.387201   18929 api_server.go:164] Checking apiserver status ...
	I0816 22:37:50.387249   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:37:50.397435   18929 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:37:50.397461   18929 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
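The repeated "Checking apiserver status" stanzas above are a fixed-interval pgrep poll, one probe roughly every 200 ms, and the verdict on this line is what happens when no kube-apiserver process appears before the retry budget runs out. The shape of that loop, sketched (the timeout value here is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServer probes pgrep the way the log does and gives up with
	// the same "timed out" verdict once the window closes.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when a matching process exists.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			time.Sleep(200 * time.Millisecond)
		}
		return fmt.Errorf("apiserver error: timed out waiting for the condition")
	}

	func main() {
		if err := waitForAPIServer(30 * time.Second); err != nil {
			fmt.Println(err)
		}
	}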
	I0816 22:37:50.397471   18929 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:37:50.397485   18929 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:37:50.397549   18929 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:37:50.439348   18929 cri.go:76] found id: ""
	I0816 22:37:50.439419   18929 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:37:50.459652   18929 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:37:50.469766   18929 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:37:50.469836   18929 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:37:50.479399   18929 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:37:50.479422   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:50.872420   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:50.387080   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:52.388399   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:53.358602   19204 containerd.go:546] Took 6.359210 seconds to copy over tarball
	I0816 22:37:53.358725   19204 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 22:37:51.735229   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:54.223000   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:52.412541   18929 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.540081052s)
	I0816 22:37:52.412575   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:52.718154   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:37:52.886875   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
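Rather than a full kubeadm init, the restart path replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, and etcd, all against the same rendered /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence (the real calls also prefix PATH with /var/lib/minikube/binaries/v1.21.3 via sudo env, omitted here):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// replayInitPhases mirrors the ordered "kubeadm init phase" calls in the
	// log; each phase is run against the same rendered config.
	func replayInitPhases(kubeadmYAML string) error {
		phases := [][]string{
			{"certs", "all"},
			{"kubeconfig", "all"},
			{"kubelet-start"},
			{"control-plane", "all"},
			{"etcd", "local"},
		}
		for _, p := range phases {
			args := append([]string{"kubeadm", "init", "phase"}, p...)
			args = append(args, "--config", kubeadmYAML)
			if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("kubeadm %v: %v\n%s", p, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := replayInitPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
			fmt.Println(err)
		}
	}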
	I0816 22:37:53.025017   18929 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:37:53.025085   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:53.540988   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:54.040437   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:54.541392   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:55.040418   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:55.540381   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:54.887899   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:56.229434   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:58.302035   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:00.733041   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:37:56.040801   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:56.540669   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:57.040354   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:57.540386   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:58.040333   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:58.540400   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:59.040772   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:37:59.540444   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:00.041274   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:00.540645   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:00.741760   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:02.887487   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:03.393238   19204 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (10.034485098s)
	I0816 22:38:03.393270   19204 containerd.go:553] Took 10.034612 seconds to extract the tarball
	I0816 22:38:03.393282   19204 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0816 22:38:03.459021   19204 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:38:03.599477   19204 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:38:03.656046   19204 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0816 22:38:03.843112   19204 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0816 22:38:03.858574   19204 docker.go:153] disabling docker service ...
	I0816 22:38:03.858632   19204 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:38:03.872784   19204 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:38:03.886816   19204 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:38:04.029472   19204 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:38:04.164998   19204 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:38:04.176395   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:38:04.190579   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY
29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kI
gogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0816 22:38:04.204338   19204 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:38:04.211355   19204 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:38:04.211415   19204 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:38:04.229181   19204 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0816 22:38:04.236487   19204 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:38:04.368079   19204 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:38:03.226580   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:05.846484   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:01.040586   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:01.541229   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:02.041014   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:02.540773   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:03.040804   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:03.540654   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:04.041158   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:04.540403   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:05.041212   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:05.540477   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:05.871071   19204 ssh_runner.go:189] Completed: sudo systemctl restart containerd: (1.502953255s)
	I0816 22:38:05.871107   19204 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0816 22:38:05.871162   19204 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:38:05.876672   19204 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0816 22:38:06.981936   19204 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:38:06.987477   19204 start.go:413] Will wait 60s for crictl version
	I0816 22:38:06.987542   19204 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:38:07.019404   19204 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0816 22:38:07.019460   19204 ssh_runner.go:149] Run: containerd --version
	I0816 22:38:07.056241   19204 ssh_runner.go:149] Run: containerd --version
	I0816 22:38:05.841456   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:07.888564   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:07.088137   19204 out.go:177] * Preparing Kubernetes v1.21.3 on containerd 1.4.9 ...
	I0816 22:38:07.088183   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetIP
	I0816 22:38:07.093462   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:38:07.093796   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:38:07.093832   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:38:07.093973   19204 ssh_runner.go:149] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0816 22:38:07.098921   19204 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:38:07.109221   19204 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 22:38:07.109293   19204 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:38:07.143575   19204 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:38:07.143601   19204 containerd.go:517] Images already preloaded, skipping extraction
	I0816 22:38:07.143659   19204 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:38:07.174105   19204 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:38:07.174129   19204 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:38:07.174182   19204 ssh_runner.go:149] Run: sudo crictl info
	I0816 22:38:07.212980   19204 cni.go:93] Creating CNI manager for ""
	I0816 22:38:07.213012   19204 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:38:07.213028   19204 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0816 22:38:07.213043   19204 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.186 APIServerPort:8444 KubernetesVersion:v1.21.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-different-port-20210816223418-6986 NodeName:default-k8s-different-port-20210816223418-6986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.186"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.
50.186 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:38:07.213191   19204 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.186
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "default-k8s-different-port-20210816223418-6986"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.186
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.186"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.21.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0816 22:38:07.213279   19204 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.21.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=default-k8s-different-port-20210816223418-6986 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.186 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210816223418-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0816 22:38:07.213332   19204 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.21.3
	I0816 22:38:07.222054   19204 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:38:07.222139   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:38:07.230063   19204 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (562 bytes)
	I0816 22:38:07.244461   19204 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0816 22:38:07.259892   19204 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0816 22:38:07.274883   19204 ssh_runner.go:149] Run: grep 192.168.50.186	control-plane.minikube.internal$ /etc/hosts
	I0816 22:38:07.280261   19204 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.186	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:38:07.293265   19204 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986 for IP: 192.168.50.186
	I0816 22:38:07.293314   19204 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:38:07.293333   19204 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:38:07.293384   19204 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/client.key
	I0816 22:38:07.293423   19204 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/apiserver.key.c5cc0a12
	I0816 22:38:07.293458   19204 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/proxy-client.key
	I0816 22:38:07.293569   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem (1338 bytes)
	W0816 22:38:07.293608   19204 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
	I0816 22:38:07.293618   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 22:38:07.293643   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:38:07.293668   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:38:07.293692   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1675 bytes)
	I0816 22:38:07.293738   19204 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:38:07.294686   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:38:07.314730   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0816 22:38:07.332358   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:38:07.351920   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/default-k8s-different-port-20210816223418-6986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0816 22:38:07.369849   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:38:07.388099   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 22:38:07.406297   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:38:07.425998   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 22:38:07.443687   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:38:07.460832   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
	I0816 22:38:07.481210   19204 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
	I0816 22:38:07.501717   19204 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:38:07.514903   19204 ssh_runner.go:149] Run: openssl version
	I0816 22:38:07.520949   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:38:07.531264   19204 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:38:07.536846   19204 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:42 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:38:07.536898   19204 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:38:07.543551   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:38:07.553322   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
	I0816 22:38:07.563414   19204 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6986.pem
	I0816 22:38:07.568579   19204 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:49 /usr/share/ca-certificates/6986.pem
	I0816 22:38:07.568631   19204 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
	I0816 22:38:07.574828   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
	I0816 22:38:07.582849   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
	I0816 22:38:07.591254   19204 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/69862.pem
	I0816 22:38:07.595981   19204 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:49 /usr/share/ca-certificates/69862.pem
	I0816 22:38:07.596044   19204 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
	I0816 22:38:07.602206   19204 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
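The three openssl x509 -hash -noout runs above compute OpenSSL's subject-name hash, which is how /etc/ssl/certs is indexed: each trusted CA gets a <hash>.0 symlink (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A hedged Go sketch of the same convention, shelling out to the same openssl invocation; the paths are illustrative, and on the VM each step runs under the sudo shown in the log:

-- go example --
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash reproduces the logged pattern: ask openssl for the
// subject hash of certPath, then symlink <certsDir>/<hash>.0 to it.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Illustrative path; requires write access to /etc/ssl/certs.
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
-- /go example --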
	I0816 22:38:07.611191   19204 kubeadm.go:390] StartCluster: {Name:default-k8s-different-port-20210816223418-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:default-k8s-different-port-20210816223418-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:38:07.611272   19204 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0816 22:38:07.611319   19204 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:38:07.643146   19204 cri.go:76] found id: ""
	I0816 22:38:07.643226   19204 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:38:07.650886   19204 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:38:07.650919   19204 kubeadm.go:600] restartCluster start
	I0816 22:38:07.650971   19204 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:38:07.658653   19204 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:07.659605   19204 kubeconfig.go:117] verify returned: extract IP: "default-k8s-different-port-20210816223418-6986" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:38:07.660046   19204 kubeconfig.go:128] "default-k8s-different-port-20210816223418-6986" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:38:07.661820   19204 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:38:07.664797   19204 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:38:07.672378   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:07.672416   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:07.682197   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:07.882615   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:07.882689   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:07.893628   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.082995   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.083063   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.092764   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.283037   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.283112   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.293325   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.482586   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.482681   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.493502   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.682844   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.682915   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.693201   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:08.882416   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:08.882491   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:08.892118   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.082359   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.082457   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.092165   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.282385   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.282459   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.291528   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.482860   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.482930   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.493037   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.682335   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.682408   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.691945   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:09.883133   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:09.883193   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:09.892794   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.083140   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.083233   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.092308   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
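Everything from 22:38:07.672 onward in this process is one probe repeated about every 200ms: run sudo pgrep -xnf kube-apiserver.*minikube.* and treat exit status 1 (the empty stdout/stderr blocks logged above) as "apiserver not up yet". A compact sketch of that poll loop, run locally for brevity where the real runner goes through SSH:

-- go example --
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID polls pgrep until the kube-apiserver process
// appears or the deadline passes, mirroring the api_server.go:164/168
// lines above.
func waitForAPIServerPID(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return string(out), nil // pid found
		}
		time.Sleep(200 * time.Millisecond) // same cadence as the log
	}
	return "", fmt.Errorf("apiserver process did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerPID(30 * time.Second)
	fmt.Println(pid, err)
}
-- /go example --

The repeated "Process exited with status 1" warnings are exactly the err != nil branch here surfacing pgrep's no-match exit code.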
	I0816 22:38:08.223670   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:10.742112   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:06.041308   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:06.540690   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:07.041155   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:07.540839   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:08.040793   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:08.541292   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:09.041388   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:09.540943   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:10.041377   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:10.541237   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:10.386476   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:12.889815   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:10.282796   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.282889   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.292190   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.482261   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.482330   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.491729   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.683104   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.683186   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.693060   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.693079   19204 api_server.go:164] Checking apiserver status ...
	I0816 22:38:10.693121   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:38:10.701893   19204 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:38:10.701916   19204 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
	I0816 22:38:10.701925   19204 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:38:10.701938   19204 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:38:10.701989   19204 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:38:10.740433   19204 cri.go:76] found id: ""
	I0816 22:38:10.740501   19204 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:38:10.756485   19204 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:38:10.765450   19204 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:38:10.765507   19204 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:38:10.772477   19204 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:38:10.772499   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:11.017384   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:12.671111   19204 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.653686174s)
	I0816 22:38:12.671155   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:12.947393   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:13.086256   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
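Rather than a full kubeadm init, the restart path replays individual init phases: certs, kubeconfig, kubelet-start, control-plane, and etcd, each under a PATH override that picks up the pinned binaries in /var/lib/minikube/binaries/v1.21.3. A sketch of driving that phase sequence, with the phase list taken verbatim from the log (the helper itself is illustrative):

-- go example --
package main

import (
	"fmt"
	"os/exec"
)

// runKubeadmPhases replays the phased restart seen in the log: each
// phase goes through bash so the PATH override selects the pinned
// kubeadm binary.
func runKubeadmPhases(version, config string) error {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(
			"sudo env PATH=/var/lib/minikube/binaries/%s:$PATH kubeadm init phase %s --config %s",
			version, phase, config)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	fmt.Println(runKubeadmPhases("v1.21.3", "/var/tmp/minikube/kubeadm.yaml"))
}
-- /go example --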
	I0816 22:38:13.215447   19204 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:38:13.215508   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:13.731105   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.231119   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.731093   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:15.231319   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:13.224797   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:15.723341   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:11.040800   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:11.540697   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:12.040673   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:12.541181   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:13.041152   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:13.541025   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.041183   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.541230   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:14.551768   18929 api_server.go:70] duration metric: took 21.526753133s to wait for apiserver process to appear ...
	I0816 22:38:14.551790   18929 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:38:14.551800   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:38:15.386344   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:16.395588   18635 pod_ready.go:92] pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.395621   18635 pod_ready.go:81] duration metric: took 51.044447203s waiting for pod "coredns-fb8b8dccf-qwcrg" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.395634   18635 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.408068   18635 pod_ready.go:92] pod "etcd-old-k8s-version-20210816223154-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.408086   18635 pod_ready.go:81] duration metric: took 12.443476ms waiting for pod "etcd-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.408096   18635 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.414488   18635 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-20210816223154-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.414507   18635 pod_ready.go:81] duration metric: took 6.402316ms waiting for pod "kube-apiserver-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.414521   18635 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.420281   18635 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-20210816223154-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.420300   18635 pod_ready.go:81] duration metric: took 5.769412ms waiting for pod "kube-controller-manager-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.420313   18635 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nvb2s" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.425411   18635 pod_ready.go:92] pod "kube-proxy-nvb2s" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.425430   18635 pod_ready.go:81] duration metric: took 5.109715ms waiting for pod "kube-proxy-nvb2s" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.425440   18635 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.784339   18635 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-20210816223154-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:16.784360   18635 pod_ready.go:81] duration metric: took 358.911908ms waiting for pod "kube-scheduler-old-k8s-version-20210816223154-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:16.784371   18635 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:18.553150   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:38:18.553194   18929 api_server.go:101] status: https://192.168.105.129:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:38:19.053887   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:38:19.071151   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:38:19.071179   18929 api_server.go:101] status: https://192.168.105.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:38:19.553619   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:38:19.561382   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:38:19.561406   18929 api_server.go:101] status: https://192.168.105.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:38:20.053341   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:38:20.061527   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 200:
	ok
	I0816 22:38:20.069537   18929 api_server.go:139] control plane version: v1.21.3
	I0816 22:38:20.069560   18929 api_server.go:129] duration metric: took 5.517764917s to wait for apiserver health ...
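The healthz exchange above is the usual readiness ladder for a restarted apiserver: 403 while the RBAC bootstrap policy is still absent (the anonymous probe is rejected), then 500 while poststarthooks such as rbac/bootstrap-roles report failed, then 200 once every hook passes. A minimal polling sketch under the same anonymous-probe assumption; TLS verification is skipped here only because the probe carries no client credentials, where a real client would pin the cluster CA:

-- go example --
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls /healthz until it returns 200, printing the
// intermediate 403/500 bodies like api_server.go:239/265 above.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Anonymous bootstrap probe: no client cert, no CA pinning.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz never returned 200 within %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.105.129:8443/healthz", time.Minute))
}
-- /go example --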
	I0816 22:38:20.069572   18929 cni.go:93] Creating CNI manager for ""
	I0816 22:38:20.069581   18929 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:38:15.731207   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:16.231247   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:16.731268   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:17.230730   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:17.730956   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:18.231458   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:18.730950   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:19.230879   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:19.730819   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:20.230563   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:38:20.243853   19204 api_server.go:70] duration metric: took 7.028407985s to wait for apiserver process to appear ...
	I0816 22:38:20.243876   19204 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:38:20.243887   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:18.225200   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:20.243220   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:20.071659   18929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:38:20.071738   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:38:20.084719   18929 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
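Configuring bridge CNI amounts to writing a single conflist into /etc/cni/net.d; the log records only its size (457 bytes), not its contents. A representative bridge + host-local conflist written from Go; every value in it is an assumption for illustration, not the file minikube actually ships:

-- go example --
package main

import (
	"fmt"
	"os"
)

// A representative bridge CNI conflist. The real 1-k8s.conflist is not
// reproduced in the log, so treat the name, subnet, and flags below as
// assumed values.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    }
  ]
}`

func main() {
	// Writing under /etc/cni/net.d requires root, as the sudo mkdir above shows.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
-- /go example --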
	I0816 22:38:20.113939   18929 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:38:20.132494   18929 system_pods.go:59] 8 kube-system pods found
	I0816 22:38:20.132598   18929 system_pods.go:61] "coredns-558bd4d5db-jq6bb" [c088e8ae-638c-449f-b206-10b016f707f4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:38:20.132622   18929 system_pods.go:61] "etcd-embed-certs-20210816223333-6986" [350ff095-f45d-4c87-a10a-cbb9a0cc4358] Running
	I0816 22:38:20.132654   18929 system_pods.go:61] "kube-apiserver-embed-certs-20210816223333-6986" [7ee444e9-f198-4d9b-985e-b190a2e5e369] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0816 22:38:20.132667   18929 system_pods.go:61] "kube-controller-manager-embed-certs-20210816223333-6986" [c71ecc69-d617-48d3-a162-46d27aedd0a9] Running
	I0816 22:38:20.132676   18929 system_pods.go:61] "kube-proxy-8h6xz" [7cbdd516-13c5-469b-8e60-7dc0babb699a] Running
	I0816 22:38:20.132688   18929 system_pods.go:61] "kube-scheduler-embed-certs-20210816223333-6986" [4ebf165e-13c3-4f42-a75f-4301ea2f6c78] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0816 22:38:20.132698   18929 system_pods.go:61] "metrics-server-7c784ccb57-9xpsr" [6b6283cf-0668-48a4-9f21-61cb5723f0b8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:38:20.132704   18929 system_pods.go:61] "storage-provisioner" [7893460e-43c2-4606-8b56-c2ed9ac764bd] Running
	I0816 22:38:20.132712   18929 system_pods.go:74] duration metric: took 18.749758ms to wait for pod list to return data ...
	I0816 22:38:20.132721   18929 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:38:20.138564   18929 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:38:20.138614   18929 node_conditions.go:123] node cpu capacity is 2
	I0816 22:38:20.138632   18929 node_conditions.go:105] duration metric: took 5.904026ms to run NodePressure ...
	I0816 22:38:20.138651   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:20.830223   18929 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:38:20.835364   18929 kubeadm.go:746] kubelet initialised
	I0816 22:38:20.835384   18929 kubeadm.go:747] duration metric: took 5.139864ms waiting for restarted kubelet to initialise ...
	I0816 22:38:20.835392   18929 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:38:20.841354   18929 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace to be "Ready" ...
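The pod_ready waits that dominate the remainder of the log poll each pod's Ready condition until it flips to True or the 4m0s budget expires. A hedged client-go sketch of that check (client-go v0.21-era signatures assumed; the pod name is taken from the log line above):

-- go example --
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod's Ready condition, the same check behind
// the pod_ready.go:92/:102 lines throughout this log.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-558bd4d5db-jq6bb", 4*time.Minute))
}
-- /go example --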
	I0816 22:38:19.191797   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:21.192936   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:25.244953   19204 api_server.go:255] stopped: https://192.168.50.186:8444/healthz: Get "https://192.168.50.186:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 22:38:22.723414   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:25.223163   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:22.860677   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:24.863916   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:23.690499   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:25.690995   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:27.691820   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:25.746028   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:27.721976   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:29.722107   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:27.361030   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:29.361218   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:30.190894   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:32.192100   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:30.746969   19204 api_server.go:255] stopped: https://192.168.50.186:8444/healthz: Get "https://192.168.50.186:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 22:38:31.245148   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:32.224115   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:34.723153   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:31.859919   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:33.863770   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:34.691552   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:36.693980   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:36.246218   19204 api_server.go:255] stopped: https://192.168.50.186:8444/healthz: Get "https://192.168.50.186:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 22:38:36.745853   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:37.223369   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:39.239225   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:36.360668   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:38.361218   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:40.871372   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:41.344967   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:38:41.344991   19204 api_server.go:101] status: https://192.168.50.186:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:38:41.745061   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:41.754168   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:38:41.754195   19204 api_server.go:101] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:38:42.245898   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:42.258458   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:38:42.258509   19204 api_server.go:101] status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:38:42.745610   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:38:42.756658   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 200:
	ok
	I0816 22:38:42.770293   19204 api_server.go:139] control plane version: v1.21.3
	I0816 22:38:42.770321   19204 api_server.go:129] duration metric: took 22.526438535s to wait for apiserver health ...
	I0816 22:38:42.770332   19204 cni.go:93] Creating CNI manager for ""
	I0816 22:38:42.770339   19204 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:38:39.192176   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:41.198006   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:42.772377   19204 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:38:42.772434   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:38:42.788298   19204 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:38:42.809709   19204 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:38:42.824805   19204 system_pods.go:59] 8 kube-system pods found
	I0816 22:38:42.824843   19204 system_pods.go:61] "coredns-558bd4d5db-ssfkf" [eb30728b-0eae-41d8-90bc-d8de8c6b4caa] Running
	I0816 22:38:42.824857   19204 system_pods.go:61] "etcd-default-k8s-different-port-20210816223418-6986" [825a27d4-d8dc-4dbe-a724-ac2e59508c5b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0816 22:38:42.824865   19204 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210816223418-6986" [a3383733-5a20-4b5a-aeab-df3e61e37d94] Running
	I0816 22:38:42.824882   19204 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210816223418-6986" [42f433b1-271b-41a6-96a0-ab85fe6ba28e] Pending / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 22:38:42.824896   19204 system_pods.go:61] "kube-proxy-psg4t" [98ca6629-d521-445d-99c2-b7e7ddf3b973] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0816 22:38:42.824905   19204 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210816223418-6986" [bef50322-5dc7-4680-b867-e17eb23298a8] Running
	I0816 22:38:42.824919   19204 system_pods.go:61] "metrics-server-7c784ccb57-rmrr6" [325f4892-3ae2-4a08-bc13-22c74c15c362] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:38:42.824929   19204 system_pods.go:61] "storage-provisioner" [89aadc6c-b5b0-47eb-b6e0-0f5fb78b1689] Running
	I0816 22:38:42.824936   19204 system_pods.go:74] duration metric: took 15.209253ms to wait for pod list to return data ...
	I0816 22:38:42.824947   19204 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:38:42.835095   19204 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:38:42.835144   19204 node_conditions.go:123] node cpu capacity is 2
	I0816 22:38:42.835160   19204 node_conditions.go:105] duration metric: took 10.206913ms to run NodePressure ...
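Before reapplying addons, the runner verifies NodePressure: it reads node capacity (17784752Ki ephemeral storage and 2 CPUs here) and would flag any node whose Memory/Disk/PID pressure condition is True. A client-go sketch of that verification, under the same version assumption as the previous sketch:

-- go example --
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// verifyNodePressure mirrors the node_conditions.go lines above: print
// each node's capacity and reject any node reporting pressure.
func verifyNodePressure(cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	pressure := map[corev1.NodeConditionType]bool{
		corev1.NodeMemoryPressure: true,
		corev1.NodeDiskPressure:   true,
		corev1.NodePIDPressure:    true,
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: storage=%s cpu=%s\n", n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
		for _, c := range n.Status.Conditions {
			if pressure[c.Type] && c.Status == corev1.ConditionTrue {
				return fmt.Errorf("node %s under %s", n.Name, c.Type)
			}
		}
	}
	return nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	fmt.Println(verifyNodePressure(kubernetes.NewForConfigOrDie(cfg)))
}
-- /go example --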
	I0816 22:38:42.835178   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:38:43.431532   19204 kubeadm.go:731] waiting for restarted kubelet to initialise ...
	I0816 22:38:43.443469   19204 kubeadm.go:746] kubelet initialised
	I0816 22:38:43.443543   19204 kubeadm.go:747] duration metric: took 11.973692ms waiting for restarted kubelet to initialise ...
	I0816 22:38:43.443567   19204 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:38:43.467119   19204 pod_ready.go:78] waiting up to 4m0s for pod "coredns-558bd4d5db-ssfkf" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:43.487197   19204 pod_ready.go:92] pod "coredns-558bd4d5db-ssfkf" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:43.487224   19204 pod_ready.go:81] duration metric: took 20.062907ms waiting for pod "coredns-558bd4d5db-ssfkf" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:43.487236   19204 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:41.723036   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:43.727234   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:42.883394   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:45.360217   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:43.692394   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:46.195001   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:45.513670   19204 pod_ready.go:102] pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:47.520170   19204 pod_ready.go:102] pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:50.012608   19204 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:50.012643   19204 pod_ready.go:81] duration metric: took 6.525398312s waiting for pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:50.012653   19204 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:50.018616   19204 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:50.018632   19204 pod_ready.go:81] duration metric: took 5.971078ms waiting for pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:50.018641   19204 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:46.223793   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:48.231527   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:50.721902   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:47.864929   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:50.359955   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:48.690708   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:50.691511   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:53.191133   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:52.030327   19204 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:54.530276   19204 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:52.723113   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:54.730785   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:52.865142   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:55.362902   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:55.692797   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:58.193231   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:56.537583   19204 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:59.032998   19204 pod_ready.go:102] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:59.531144   19204 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:38:59.531179   19204 pod_ready.go:81] duration metric: took 9.512530001s waiting for pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:59.531194   19204 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-psg4t" in "kube-system" namespace to be "Ready" ...
	I0816 22:38:57.227423   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:59.722421   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:38:57.860847   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:00.383065   18929 pod_ready.go:102] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:00.194401   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:02.693032   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:01.045104   19204 pod_ready.go:92] pod "kube-proxy-psg4t" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.045136   19204 pod_ready.go:81] duration metric: took 1.513934389s waiting for pod "kube-proxy-psg4t" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.045162   19204 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:03.065559   19204 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:02.225371   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:04.231432   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:01.360648   18929 pod_ready.go:92] pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.360679   18929 pod_ready.go:81] duration metric: took 40.519291305s waiting for pod "coredns-558bd4d5db-jq6bb" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.360692   18929 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.377816   18929 pod_ready.go:92] pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.377835   18929 pod_ready.go:81] duration metric: took 17.135128ms waiting for pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.377844   18929 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.384900   18929 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.384919   18929 pod_ready.go:81] duration metric: took 7.067915ms waiting for pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.384928   18929 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.391593   18929 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.391615   18929 pod_ready.go:81] duration metric: took 6.679953ms waiting for pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.391628   18929 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8h6xz" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.397839   18929 pod_ready.go:92] pod "kube-proxy-8h6xz" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.397859   18929 pod_ready.go:81] duration metric: took 6.224125ms waiting for pod "kube-proxy-8h6xz" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.397870   18929 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.757203   18929 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:01.757231   18929 pod_ready.go:81] duration metric: took 359.352415ms waiting for pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:01.757245   18929 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:04.166965   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:05.190883   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:07.691413   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:05.560049   19204 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:07.563106   19204 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:10.058732   19204 pod_ready.go:102] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:06.241105   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:08.721067   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:10.729982   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:06.173818   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:08.671197   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:10.190249   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:12.190937   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:11.058551   19204 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:39:11.058589   19204 pod_ready.go:81] duration metric: took 10.013415785s waiting for pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:11.058602   19204 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace to be "Ready" ...
	I0816 22:39:13.079741   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:13.222923   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:15.223480   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:11.169568   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:13.668888   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:15.675907   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:14.691328   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:17.193097   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:15.574185   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:18.080714   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:17.721688   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:20.223136   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:18.166872   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:20.167888   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:19.690743   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:21.695097   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:20.573176   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:22.575373   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:25.080599   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:22.721982   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:24.723334   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:22.674385   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:25.168465   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:24.191127   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:26.692188   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:27.576508   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:30.077538   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:26.725975   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:29.222550   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:27.667108   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:29.672819   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:29.190076   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:31.191096   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:32.573255   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:34.574846   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:31.222778   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:33.721695   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:35.722989   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:32.167222   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:34.168925   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:33.691602   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:35.693194   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:38.192247   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:36.575818   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:39.074280   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:37.724177   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:40.222061   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:36.667227   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:38.667709   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:40.193105   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:42.691214   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:41.577819   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:44.074371   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:42.222318   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:44.223676   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:41.169382   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:43.169678   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:45.172140   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:44.692521   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:46.693152   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:46.080520   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:48.574175   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:46.226822   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:48.723407   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:50.723464   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:47.669324   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:50.168305   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:49.191566   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:51.192223   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:50.574493   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:53.072736   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:55.075288   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:53.226025   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:55.722244   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:52.667088   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:54.668826   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:53.690899   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:55.692317   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:58.190689   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:57.076942   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:59.573822   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:58.225641   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:00.721925   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:57.165321   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:39:59.171812   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:00.194014   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:02.691574   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:01.573901   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:04.073928   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:02.724585   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:04.724644   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:01.175154   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:03.669857   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:05.191832   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:07.693327   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:06.576903   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:09.078443   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:07.222275   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:09.224637   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:06.167190   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:08.168551   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:10.668660   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:10.191769   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:12.693193   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:11.574665   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:13.576508   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:11.224838   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:13.721159   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:15.727256   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:12.670244   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:15.167885   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:15.194325   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:17.692108   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:16.072818   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:18.078890   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:17.729812   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:20.226491   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:17.177047   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:19.217251   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:20.192280   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:22.693518   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:20.574552   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:22.574777   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:24.577476   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:22.727579   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:24.728352   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:21.668537   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:24.167106   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:25.191135   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:27.191723   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:27.075236   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:29.574554   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:27.223601   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:29.225348   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:26.172206   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:28.666902   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:30.667512   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:29.693817   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:32.192170   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:32.073947   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:34.076857   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:31.806875   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:34.222064   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:32.670097   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:35.167425   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:34.193574   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:36.692421   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:36.575233   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:39.074418   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:36.223456   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:38.224575   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:40.721673   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:37.168398   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:39.172793   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:39.196016   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:41.690324   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:41.075116   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:43.576123   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:42.724088   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:44.724675   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:41.674073   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:44.170704   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:43.693077   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:45.693362   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:48.190525   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:45.576264   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:48.077395   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:46.729980   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:49.221967   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:46.171454   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:48.665714   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:50.668334   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:50.193564   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:52.691234   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:50.572686   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:52.574382   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:55.074999   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:51.222668   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:53.226343   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:55.725259   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:52.673171   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:55.168585   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:54.692513   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:57.191126   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:57.079875   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:59.573017   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:58.221527   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:00.227502   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:57.671255   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:00.168665   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:40:59.691534   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:01.693478   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:01.582883   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:04.072426   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:02.722966   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:04.727296   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:02.173240   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:04.665480   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:04.191798   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:06.691447   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:06.073825   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:08.074664   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:10.075325   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:07.223517   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:09.721892   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:06.667330   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:08.671220   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:09.191192   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:11.691389   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:12.076107   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:14.575585   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:11.725914   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:13.730699   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:11.169385   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:13.673312   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:14.191060   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:16.192184   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:16.576492   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:19.076650   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:16.225569   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:18.724188   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:20.724698   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:16.165664   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:18.166105   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:20.166339   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:18.691871   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:20.691922   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:23.191074   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:21.574173   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:24.075930   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:23.223119   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:25.223978   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:22.173729   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:24.666435   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:25.692064   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:27.693165   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:26.574028   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:28.577627   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:27.723162   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:30.225428   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:26.666698   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:28.667290   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:30.669320   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:30.191236   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:32.194129   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:31.078550   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:33.574708   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:32.272795   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:34.721477   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:32.670349   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:35.166861   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:34.691270   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:36.693071   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:36.073462   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:38.075367   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:36.731674   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:39.226976   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:37.170645   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:39.724821   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:39.190190   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:41.192605   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:43.194313   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:40.572815   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:43.074323   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:41.728026   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:44.222098   18923 pod_ready.go:102] pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:45.713684   18923 pod_ready.go:81] duration metric: took 4m0.016600156s waiting for pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace to be "Ready" ...
	E0816 22:41:45.713707   18923 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-44llk" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:41:45.713739   18923 pod_ready.go:38] duration metric: took 4m11.701504099s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:41:45.713769   18923 kubeadm.go:604] restartCluster took 4m33.579475629s
	W0816 22:41:45.713944   18923 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
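	[editor's note] The pod_ready.go:78/:102 entries that dominate this log come from a readiness poll of roughly the following shape. This is a hand-written approximation using client-go, not minikube's actual pod_ready.go; the helper name and the 2s poll interval are assumptions.

	package podready

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/klog/v2"
	)

	// waitPodReady polls the pod's Ready condition until it is True or the
	// timeout elapses, logging each non-ready observation much like the
	// `has status "Ready":"False"` lines above. (Sketch, not minikube code.)
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					klog.Infof("pod %q in %q namespace has status \"Ready\":%q", name, ns, c.Status)
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	When the 4m0s budget above is exhausted, the caller gives up on the restart path (the "will not retry!" entry) and falls back to resetting the cluster, which the following lines record.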
	I0816 22:41:45.714027   18923 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0816 22:41:42.167746   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:44.671010   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:45.690207   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:47.696181   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:45.573577   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:47.577169   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:50.074120   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:49.532312   18923 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.817885262s)
	I0816 22:41:49.532396   18923 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:41:49.547377   18923 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:41:49.547460   18923 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:41:49.586205   18923 cri.go:76] found id: "c265ff52803b42d9b0ff6036d0ca9fe4b655c9875474baa9a0a3f7255d230e17"
	I0816 22:41:49.586231   18923 cri.go:76] found id: ""
	W0816 22:41:49.586237   18923 kubeadm.go:840] found 1 kube-system containers to stop
	I0816 22:41:49.586243   18923 cri.go:221] Stopping containers: [c265ff52803b42d9b0ff6036d0ca9fe4b655c9875474baa9a0a3f7255d230e17]
	I0816 22:41:49.586286   18923 ssh_runner.go:149] Run: which crictl
	I0816 22:41:49.590992   18923 ssh_runner.go:149] Run: sudo /bin/crictl stop c265ff52803b42d9b0ff6036d0ca9fe4b655c9875474baa9a0a3f7255d230e17
	I0816 22:41:49.626874   18923 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:41:49.635033   18923 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:41:49.643072   18923 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:41:49.643114   18923 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
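	[editor's note] The reset-then-init fallback recorded above can be sketched as below. This is an illustrative helper only (minikube actually drives these commands over SSH via ssh_runner, and its preflight-ignore list is longer, as the log shows); the function name and shortened flag list are assumptions.

	package restart

	import (
		"os"
		"os/exec"
	)

	// resetAndInit tears the node down with `kubeadm reset`, then re-runs
	// `kubeadm init`, ignoring the preflight checks that would otherwise
	// object to leftover manifests, etcd data, and already-bound ports.
	func resetAndInit(binDir, cfg string) error {
		env := "PATH=" + binDir + ":" + os.Getenv("PATH")
		reset := exec.Command("sudo", "env", env, "kubeadm", "reset",
			"--cri-socket", "/run/containerd/containerd.sock", "--force")
		if err := reset.Run(); err != nil {
			return err
		}
		init := exec.Command("sudo", "env", env, "kubeadm", "init",
			"--config", cfg,
			"--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,Mem")
		return init.Run()
	}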
	I0816 22:41:46.671498   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:49.167852   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:50.191302   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:52.194912   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:52.573508   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:54.574289   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:51.170118   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:53.672114   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:54.691353   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:56.691660   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:57.075408   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:59.575201   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:56.166934   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:58.175241   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:00.668070   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:41:58.692572   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:00.693110   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:02.693563   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:02.073370   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:04.074072   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:03.171450   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:05.675018   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:05.192214   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:07.692700   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:08.829041   18923 out.go:204]   - Generating certificates and keys ...
	I0816 22:42:08.831708   18923 out.go:204]   - Booting up control plane ...
	I0816 22:42:08.834200   18923 out.go:204]   - Configuring RBAC rules ...
	I0816 22:42:08.836416   18923 cni.go:93] Creating CNI manager for ""
	I0816 22:42:08.836433   18923 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:42:06.578343   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:08.578554   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:08.838017   18923 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:42:08.838073   18923 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:42:08.846501   18923 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:42:08.869457   18923 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:42:08.869501   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:08.869527   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=no-preload-20210816223156-6986 minikube.k8s.io/updated_at=2021_08_16T22_42_08_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:09.240543   18923 ops.go:34] apiserver oom_adj: -16
	I0816 22:42:09.240662   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:09.839173   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:10.338906   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:10.839126   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:08.175656   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:10.670201   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:09.693093   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:12.193949   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:11.076847   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:13.572667   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:11.339623   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:11.839145   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:12.339335   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:12.839352   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:13.339016   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:13.838633   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:14.339209   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:14.839574   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:15.338605   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:15.838986   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:13.166828   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:15.170558   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:14.195434   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:16.691097   18635 pod_ready.go:102] pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:17.183312   18635 pod_ready.go:81] duration metric: took 4m0.398928004s waiting for pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace to be "Ready" ...
	E0816 22:42:17.183337   18635 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-8546d8b77b-gl6jr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:42:17.183357   18635 pod_ready.go:38] duration metric: took 4m51.857756569s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:42:17.183387   18635 kubeadm.go:604] restartCluster took 5m19.62322748s
	W0816 22:42:17.183554   18635 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:42:17.183589   18635 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
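
Each pod_ready.go line in this log is one iteration of a poll against the pod's Ready condition; once the per-pod 4m0s budget above expires, restartCluster gives up and falls back to `kubeadm reset`. A condensed client-go sketch of that kind of wait (function name, 2s interval, and error handling are assumptions, not minikube's exact code):

    package waiters

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the named pod reports condition Ready=True,
    // or the timeout (4m0s in the log above) elapses.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat transient errors as "not ready yet"
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }
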
	I0816 22:42:15.573445   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:17.576213   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:19.578780   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:16.339618   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:16.839112   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:17.338889   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:17.838606   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:18.339509   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:18.839537   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:19.338632   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:19.839240   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:20.339527   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:20.838664   18923 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:17.671899   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:19.672963   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:20.586991   18635 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.403367986s)
	I0816 22:42:20.587083   18635 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:42:20.603414   18635 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:42:20.603499   18635 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:42:20.644469   18635 cri.go:76] found id: ""
	I0816 22:42:20.644547   18635 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:42:20.654179   18635 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:42:20.664747   18635 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:42:20.664790   18635 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.14.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap"
	I0816 22:42:21.326940   18635 out.go:204]   - Generating certificates and keys ...
	I0816 22:42:21.189008   18923 kubeadm.go:985] duration metric: took 12.319564991s to wait for elevateKubeSystemPrivileges.
	I0816 22:42:21.189042   18923 kubeadm.go:392] StartCluster complete in 5m9.132482632s
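
The 12.3s elevateKubeSystemPrivileges metric above covers the long run of `kubectl get sa default` retries: kubeadm creates the default service account asynchronously after init, so minikube polls for it before binding cluster-admin to kube-system. A standalone sketch of such a poll (helper name and the 500ms interval are assumptions):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA retries `kubectl get sa default` until it succeeds,
    // mirroring the repeated ssh_runner invocations in the log above.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if err := cmd.Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default service account not created within %s", timeout)
    }
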
	I0816 22:42:21.189068   18923 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:42:21.189186   18923 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:42:21.191084   18923 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0816 22:42:21.253468   18923 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0816 22:42:22.263255   18923 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "no-preload-20210816223156-6986" rescaled to 1
	I0816 22:42:22.263323   18923 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.116.66 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0816 22:42:22.265111   18923 out.go:177] * Verifying Kubernetes components...
	I0816 22:42:22.265169   18923 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:42:22.263389   18923 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:42:22.263413   18923 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:42:22.265318   18923 addons.go:59] Setting storage-provisioner=true in profile "no-preload-20210816223156-6986"
	I0816 22:42:22.265343   18923 addons.go:135] Setting addon storage-provisioner=true in "no-preload-20210816223156-6986"
	I0816 22:42:22.265343   18923 addons.go:59] Setting dashboard=true in profile "no-preload-20210816223156-6986"
	W0816 22:42:22.265352   18923 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:42:22.265365   18923 addons.go:135] Setting addon dashboard=true in "no-preload-20210816223156-6986"
	W0816 22:42:22.265384   18923 addons.go:147] addon dashboard should already be in state true
	I0816 22:42:22.263563   18923 config.go:177] Loaded profile config "no-preload-20210816223156-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0816 22:42:22.265401   18923 addons.go:59] Setting metrics-server=true in profile "no-preload-20210816223156-6986"
	I0816 22:42:22.265412   18923 host.go:66] Checking if "no-preload-20210816223156-6986" exists ...
	I0816 22:42:22.265427   18923 addons.go:135] Setting addon metrics-server=true in "no-preload-20210816223156-6986"
	W0816 22:42:22.265437   18923 addons.go:147] addon metrics-server should already be in state true
	I0816 22:42:22.265384   18923 addons.go:59] Setting default-storageclass=true in profile "no-preload-20210816223156-6986"
	I0816 22:42:22.265462   18923 host.go:66] Checking if "no-preload-20210816223156-6986" exists ...
	I0816 22:42:22.265390   18923 host.go:66] Checking if "no-preload-20210816223156-6986" exists ...
	I0816 22:42:22.265461   18923 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-20210816223156-6986"
	I0816 22:42:22.265940   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.265944   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.265957   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.265942   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.265975   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.265986   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.266089   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.266123   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.281969   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45777
	I0816 22:42:22.282708   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.282877   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40837
	I0816 22:42:22.283046   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42349
	I0816 22:42:22.283302   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.283322   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.283427   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.283650   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.283893   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.284078   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.284092   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.284330   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.284347   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.284461   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.284627   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.284665   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.284970   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.285003   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.285116   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.285285   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.293128   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38523
	I0816 22:42:22.293558   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.294059   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.294082   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.294429   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.294987   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.295053   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.298092   18923 addons.go:135] Setting addon default-storageclass=true in "no-preload-20210816223156-6986"
	W0816 22:42:22.298118   18923 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:42:22.298147   18923 host.go:66] Checking if "no-preload-20210816223156-6986" exists ...
	I0816 22:42:22.298560   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.298601   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.302416   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44833
	I0816 22:42:22.302994   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.303562   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.303593   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.304002   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.304209   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.305854   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34121
	I0816 22:42:22.306273   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.307236   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.307263   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.307631   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.307783   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.308340   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .DriverName
	I0816 22:42:22.310958   18923 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:42:22.311023   18923 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:42:22.311044   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:42:22.311064   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHHostname
	I0816 22:42:22.311377   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .DriverName
	I0816 22:42:22.313216   18923 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:42:22.311947   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45779
	I0816 22:42:22.313321   18923 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:42:22.313337   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:42:22.312981   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38669
	I0816 22:42:22.313354   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHHostname
	I0816 22:42:22.313674   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.313848   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.314124   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.314144   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.314391   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.314413   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.314493   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.314698   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.314875   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.315544   18923 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:22.315591   18923 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:22.319514   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .DriverName
	I0816 22:42:22.319736   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.321507   18923 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:42:22.320102   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:66:08", ip: ""} in network mk-no-preload-20210816223156-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:37:07 +0000 UTC Type:0 Mac:52:54:00:1e:66:08 Iaid: IPaddr:192.168.116.66 Prefix:24 Hostname:no-preload-20210816223156-6986 Clientid:01:52:54:00:1e:66:08}
	I0816 22:42:22.320309   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHPort
	I0816 22:42:22.320694   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.321331   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHPort
	I0816 22:42:22.321669   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:66:08", ip: ""} in network mk-no-preload-20210816223156-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:37:07 +0000 UTC Type:0 Mac:52:54:00:1e:66:08 Iaid: IPaddr:192.168.116.66 Prefix:24 Hostname:no-preload-20210816223156-6986 Clientid:01:52:54:00:1e:66:08}
	I0816 22:42:22.321594   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined IP address 192.168.116.66 and MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.180281   18635 out.go:204]   - Booting up control plane ...
	I0816 22:42:22.073806   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:24.079495   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:22.323189   18923 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:42:22.321708   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined IP address 192.168.116.66 and MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.321766   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHKeyPath
	I0816 22:42:22.321808   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHKeyPath
	I0816 22:42:22.323243   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:42:22.323341   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:42:22.323363   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHHostname
	I0816 22:42:22.323468   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHUsername
	I0816 22:42:22.323473   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHUsername
	I0816 22:42:22.323663   18923 sshutil.go:53] new ssh client: &{IP:192.168.116.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa Username:docker}
	I0816 22:42:22.323678   18923 sshutil.go:53] new ssh client: &{IP:192.168.116.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa Username:docker}
	I0816 22:42:22.328724   18923 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45831
	I0816 22:42:22.329130   18923 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:22.329535   18923 main.go:130] libmachine: Using API Version  1
	I0816 22:42:22.329554   18923 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:22.329851   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.329938   18923 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:22.330124   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetState
	I0816 22:42:22.330329   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:66:08", ip: ""} in network mk-no-preload-20210816223156-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:37:07 +0000 UTC Type:0 Mac:52:54:00:1e:66:08 Iaid: IPaddr:192.168.116.66 Prefix:24 Hostname:no-preload-20210816223156-6986 Clientid:01:52:54:00:1e:66:08}
	I0816 22:42:22.330363   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined IP address 192.168.116.66 and MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.330478   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHPort
	I0816 22:42:22.330620   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHKeyPath
	I0816 22:42:22.330750   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHUsername
	I0816 22:42:22.330873   18923 sshutil.go:53] new ssh client: &{IP:192.168.116.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa Username:docker}
	I0816 22:42:22.333001   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .DriverName
	I0816 22:42:22.333246   18923 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:42:22.333262   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:42:22.333279   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHHostname
	I0816 22:42:22.338603   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.339024   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1e:66:08", ip: ""} in network mk-no-preload-20210816223156-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:37:07 +0000 UTC Type:0 Mac:52:54:00:1e:66:08 Iaid: IPaddr:192.168.116.66 Prefix:24 Hostname:no-preload-20210816223156-6986 Clientid:01:52:54:00:1e:66:08}
	I0816 22:42:22.339055   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | domain no-preload-20210816223156-6986 has defined IP address 192.168.116.66 and MAC address 52:54:00:1e:66:08 in network mk-no-preload-20210816223156-6986
	I0816 22:42:22.339242   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHPort
	I0816 22:42:22.339393   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHKeyPath
	I0816 22:42:22.339570   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .GetSSHUsername
	I0816 22:42:22.339731   18923 sshutil.go:53] new ssh client: &{IP:192.168.116.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/no-preload-20210816223156-6986/id_rsa Username:docker}
	I0816 22:42:22.671302   18923 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:42:22.671331   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:42:22.674471   18923 node_ready.go:35] waiting up to 6m0s for node "no-preload-20210816223156-6986" to be "Ready" ...
	I0816 22:42:22.674764   18923 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.116.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:42:22.680985   18923 node_ready.go:49] node "no-preload-20210816223156-6986" has status "Ready":"True"
	I0816 22:42:22.681006   18923 node_ready.go:38] duration metric: took 6.219914ms waiting for node "no-preload-20210816223156-6986" to be "Ready" ...
	I0816 22:42:22.681017   18923 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:42:22.690584   18923 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:22.758871   18923 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:42:22.908102   18923 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:42:22.908132   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:42:23.011738   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:42:23.011768   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:42:23.048103   18923 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:42:23.113442   18923 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:42:23.113472   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:42:23.311431   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:42:23.311461   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:42:23.413450   18923 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:42:23.601523   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:42:23.601554   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:42:23.797882   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:42:23.797908   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:42:23.957080   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:42:23.957109   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:42:24.496102   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:42:24.496134   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:42:24.715720   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:42:24.715807   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:42:24.725833   18923 pod_ready.go:102] pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:24.991135   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:42:24.991165   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:42:25.061259   18923 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.116.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.386242884s)
	I0816 22:42:25.061297   18923 start.go:728] {"host.minikube.internal": 192.168.116.1} host record injected into CoreDNS
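
The completed pipeline above rewrites the coredns ConfigMap through sed, splicing a hosts stanza ahead of the `forward . /etc/resolv.conf` directive so cluster DNS resolves host.minikube.internal to the host-side gateway IP. Reconstructed from the sed expression, the injected Corefile fragment is:

    hosts {
       192.168.116.1 host.minikube.internal
       fallthrough
    }
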
	I0816 22:42:25.085411   18923 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:42:25.085463   18923 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:42:25.132722   18923 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:42:25.402705   18923 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.64379015s)
	I0816 22:42:25.402772   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.402790   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.403123   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.403222   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:25.403245   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.403270   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.403197   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:25.403597   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:25.404574   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.404594   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:25.404607   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.404616   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.404837   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.404878   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:25.431424   18923 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.383276848s)
	I0816 22:42:25.431470   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.431484   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.431767   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.431781   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:25.431788   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:25.431799   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:25.431810   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:25.432092   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:25.432111   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:22.168138   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:24.174050   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:26.094382   18923 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.680878058s)
	I0816 22:42:26.094446   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:26.094474   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:26.094773   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:26.094830   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:26.094859   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:26.094885   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:26.094774   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:26.095167   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:26.095182   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:26.095193   18923 addons.go:313] Verifying addon metrics-server=true in "no-preload-20210816223156-6986"
	I0816 22:42:26.855647   18923 pod_ready.go:102] pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:27.149522   18923 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.016735128s)
	I0816 22:42:27.149590   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:27.149605   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:27.149955   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) DBG | Closing plugin on server side
	I0816 22:42:27.150053   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:27.150073   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:27.150083   18923 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:27.150094   18923 main.go:130] libmachine: (no-preload-20210816223156-6986) Calling .Close
	I0816 22:42:27.150330   18923 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:27.150347   18923 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:26.575022   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:28.575534   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:27.153345   18923 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0816 22:42:27.153375   18923 addons.go:344] enableAddons completed in 4.88997344s
	I0816 22:42:28.729990   18923 pod_ready.go:92] pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:28.730033   18923 pod_ready.go:81] duration metric: took 6.039413295s waiting for pod "coredns-78fcd69978-9rlk6" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:28.730047   18923 pod_ready.go:78] waiting up to 6m0s for pod "coredns-78fcd69978-vf8l9" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.743600   18923 pod_ready.go:97] error getting pod "coredns-78fcd69978-vf8l9" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-vf8l9" not found
	I0816 22:42:30.743642   18923 pod_ready.go:81] duration metric: took 2.013586217s waiting for pod "coredns-78fcd69978-vf8l9" in "kube-system" namespace to be "Ready" ...
	E0816 22:42:30.743656   18923 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-78fcd69978-vf8l9" in "kube-system" namespace (skipping!): pods "coredns-78fcd69978-vf8l9" not found
	I0816 22:42:30.743666   18923 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.757721   18923 pod_ready.go:92] pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.757745   18923 pod_ready.go:81] duration metric: took 14.064042ms waiting for pod "etcd-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.757758   18923 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.767053   18923 pod_ready.go:92] pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.767087   18923 pod_ready.go:81] duration metric: took 9.317684ms waiting for pod "kube-apiserver-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.767102   18923 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.777595   18923 pod_ready.go:92] pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.777619   18923 pod_ready.go:81] duration metric: took 10.507966ms waiting for pod "kube-controller-manager-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.777632   18923 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jhqbx" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.790967   18923 pod_ready.go:92] pod "kube-proxy-jhqbx" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.790991   18923 pod_ready.go:81] duration metric: took 13.350231ms waiting for pod "kube-proxy-jhqbx" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.791003   18923 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:26.174733   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:28.675892   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:30.951607   18923 pod_ready.go:92] pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:42:30.951630   18923 pod_ready.go:81] duration metric: took 160.617881ms waiting for pod "kube-scheduler-no-preload-20210816223156-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:30.951642   18923 pod_ready.go:38] duration metric: took 8.270610362s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:42:30.951663   18923 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:42:30.951723   18923 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:42:30.970609   18923 api_server.go:70] duration metric: took 8.707242252s to wait for apiserver process to appear ...
	I0816 22:42:30.970637   18923 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:42:30.970650   18923 api_server.go:239] Checking apiserver healthz at https://192.168.116.66:8443/healthz ...
	I0816 22:42:30.979459   18923 api_server.go:265] https://192.168.116.66:8443/healthz returned 200:
	ok
	I0816 22:42:30.980742   18923 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:42:30.980766   18923 api_server.go:129] duration metric: took 10.122149ms to wait for apiserver health ...
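
The healthz probe above is a plain HTTPS GET that succeeds once the endpoint returns 200 with body "ok". A minimal sketch of the same check (certificate verification is skipped here purely for brevity; a real client would trust the cluster CA instead):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            // Demo only: skip TLS verification instead of loading the CA.
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.116.66:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }
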
	I0816 22:42:30.980777   18923 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:42:31.156911   18923 system_pods.go:59] 8 kube-system pods found
	I0816 22:42:31.156942   18923 system_pods.go:61] "coredns-78fcd69978-9rlk6" [83d3b042-5692-4c44-b6f2-65020120666e] Running
	I0816 22:42:31.156949   18923 system_pods.go:61] "etcd-no-preload-20210816223156-6986" [4ea832d0-9ae2-4d9e-9aed-aa581e1ef4cf] Running
	I0816 22:42:31.156956   18923 system_pods.go:61] "kube-apiserver-no-preload-20210816223156-6986" [048331fe-fed3-4cca-a2c6-3b377e26647b] Running
	I0816 22:42:31.156965   18923 system_pods.go:61] "kube-controller-manager-no-preload-20210816223156-6986" [704424de-e6c8-4c70-bd80-c9b68c4c4b57] Running
	I0816 22:42:31.156971   18923 system_pods.go:61] "kube-proxy-jhqbx" [edacd358-46da-4db4-a8db-098f6edefb76] Running
	I0816 22:42:31.156977   18923 system_pods.go:61] "kube-scheduler-no-preload-20210816223156-6986" [7674933f-6de0-4090-867c-c082293d963c] Running
	I0816 22:42:31.156988   18923 system_pods.go:61] "metrics-server-7c784ccb57-dfjww" [7a744b20-6d7f-4001-a322-7e5615cbf15f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
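
metrics-server-7c784ccb57-dfjww reports Pending/ContainersNotReady because the addon was pointed at an unpullable image earlier in this run (the `- Using image fake.domain/k8s.gcr.io/echoserver:1.4` line), which is also why the parallel profiles in this log keep reporting their metrics-server pods as not "Ready". One way to observe this directly, assuming the standard metrics-server labels:

    kubectl -n kube-system get pods -l k8s-app=metrics-server
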
	I0816 22:42:31.156998   18923 system_pods.go:61] "storage-provisioner" [c8a5ee3a-f5df-4eeb-92b1-443ecef04ea2] Running
	I0816 22:42:31.157005   18923 system_pods.go:74] duration metric: took 176.222595ms to wait for pod list to return data ...
	I0816 22:42:31.157016   18923 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:42:31.345286   18923 default_sa.go:45] found service account: "default"
	I0816 22:42:31.345311   18923 default_sa.go:55] duration metric: took 188.289571ms for default service account to be created ...
	I0816 22:42:31.345319   18923 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:42:31.555450   18923 system_pods.go:86] 8 kube-system pods found
	I0816 22:42:31.555481   18923 system_pods.go:89] "coredns-78fcd69978-9rlk6" [83d3b042-5692-4c44-b6f2-65020120666e] Running
	I0816 22:42:31.555490   18923 system_pods.go:89] "etcd-no-preload-20210816223156-6986" [4ea832d0-9ae2-4d9e-9aed-aa581e1ef4cf] Running
	I0816 22:42:31.555497   18923 system_pods.go:89] "kube-apiserver-no-preload-20210816223156-6986" [048331fe-fed3-4cca-a2c6-3b377e26647b] Running
	I0816 22:42:31.555503   18923 system_pods.go:89] "kube-controller-manager-no-preload-20210816223156-6986" [704424de-e6c8-4c70-bd80-c9b68c4c4b57] Running
	I0816 22:42:31.555509   18923 system_pods.go:89] "kube-proxy-jhqbx" [edacd358-46da-4db4-a8db-098f6edefb76] Running
	I0816 22:42:31.555515   18923 system_pods.go:89] "kube-scheduler-no-preload-20210816223156-6986" [7674933f-6de0-4090-867c-c082293d963c] Running
	I0816 22:42:31.555529   18923 system_pods.go:89] "metrics-server-7c784ccb57-dfjww" [7a744b20-6d7f-4001-a322-7e5615cbf15f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:42:31.555541   18923 system_pods.go:89] "storage-provisioner" [c8a5ee3a-f5df-4eeb-92b1-443ecef04ea2] Running
	I0816 22:42:31.555553   18923 system_pods.go:126] duration metric: took 210.228822ms to wait for k8s-apps to be running ...
	I0816 22:42:31.555566   18923 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:42:31.555615   18923 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:42:31.581892   18923 system_svc.go:56] duration metric: took 26.318542ms WaitForService to wait for kubelet.
	I0816 22:42:31.581920   18923 kubeadm.go:547] duration metric: took 9.318562144s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:42:31.581949   18923 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:42:31.744656   18923 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:42:31.744683   18923 node_conditions.go:123] node cpu capacity is 2
	I0816 22:42:31.744699   18923 node_conditions.go:105] duration metric: took 162.745304ms to run NodePressure ...
	I0816 22:42:31.744708   18923 start.go:231] waiting for startup goroutines ...
	I0816 22:42:31.799332   18923 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0816 22:42:31.801873   18923 out.go:177] 
	W0816 22:42:31.802045   18923 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0816 22:42:31.803807   18923 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0816 22:42:31.805603   18923 out.go:177] * Done! kubectl is now configured to use "no-preload-20210816223156-6986" cluster and "default" namespace by default
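
The skew warning above fires because kubectl 1.20.5 is two minor versions behind the v1.22.0-rc.0 apiserver, one more than the single minor version of skew kubectl supports; the suggested wrapper avoids the mismatch by fetching and running a kubectl that matches the cluster version, e.g.:

    minikube kubectl -- version --short
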
	I0816 22:42:34.356504   18635 out.go:204]   - Configuring RBAC rules ...
	I0816 22:42:34.810198   18635 cni.go:93] Creating CNI manager for ""
	I0816 22:42:34.810227   18635 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:42:30.576523   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:33.074048   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:35.075110   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:31.178766   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:33.673945   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:35.674516   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:34.812149   18635 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:42:34.812218   18635 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:42:34.823097   18635 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:42:34.840052   18635 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:42:34.840175   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=old-k8s-version-20210816223154-6986 minikube.k8s.io/updated_at=2021_08_16T22_42_34_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:34.840179   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:35.279911   18635 ops.go:34] apiserver oom_adj: 16
	I0816 22:42:35.279930   18635 ops.go:39] adjusting apiserver oom_adj to -10
	I0816 22:42:35.279944   18635 ssh_runner.go:149] Run: /bin/bash -c "echo -10 | sudo tee /proc/$(pgrep kube-apiserver)/oom_adj"
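
The ops.go lines above read /proc/$(pgrep kube-apiserver)/oom_adj, find +16 (a likelier OOM-kill target), and rewrite it to -10 so the kernel prefers other victims under memory pressure. A standalone sketch of the same adjustment (helper name is an assumption; oom_adj is the legacy interface, superseded by oom_score_adj, and negative values require root):

    package main

    import (
        "fmt"
        "os"
    )

    // setOOMAdj pins a process's legacy OOM-killer score adjustment;
    // more negative values make the kernel less likely to kill it.
    func setOOMAdj(pid, value int) error {
        path := fmt.Sprintf("/proc/%d/oom_adj", pid)
        return os.WriteFile(path, []byte(fmt.Sprintf("%d\n", value)), 0644)
    }

    func main() {
        if err := setOOMAdj(os.Getpid(), -10); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
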
	I0816 22:42:35.279997   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:35.887807   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:36.388228   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:36.888072   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:37.388131   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:37.888197   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:37.075407   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:39.574205   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:38.169080   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:40.669388   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:38.388192   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:38.887529   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:39.387314   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:39.887397   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:40.388222   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:40.887817   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:41.388165   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:41.887336   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:42.387710   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:42.887452   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:41.575892   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:44.074399   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:43.168677   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:45.674667   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:43.388233   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:43.888191   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:44.388190   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:44.888073   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:45.387300   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:45.887633   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:46.388266   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:46.887918   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:47.387283   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:47.887770   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:46.074552   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:48.573015   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:48.387776   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:48.888189   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:49.388262   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:49.887594   18635 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:42:50.137803   18635 kubeadm.go:985] duration metric: took 15.297678668s to wait for elevateKubeSystemPrivileges.
	I0816 22:42:50.137838   18635 kubeadm.go:392] StartCluster complete in 5m52.622280434s
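	[The repeated `get sa default` runs above are a fixed 500 ms poll: kubeadm does not create the default ServiceAccount immediately, and minikube only declares the privilege elevation done once it exists, 15.3 s later in this run. The loop in shell form:

	    until sudo /var/lib/minikube/binaries/v1.14.0/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done
	]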
	I0816 22:42:50.137865   18635 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:42:50.137996   18635 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:42:50.140032   18635 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:42:50.769953   18635 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "old-k8s-version-20210816223154-6986" rescaled to 1
	I0816 22:42:50.770028   18635 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.94.246 Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}
	I0816 22:42:50.771768   18635 out.go:177] * Verifying Kubernetes components...
	I0816 22:42:50.771833   18635 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:42:50.770075   18635 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:42:50.770097   18635 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:42:50.770295   18635 config.go:177] Loaded profile config "old-k8s-version-20210816223154-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0816 22:42:50.771981   18635 addons.go:59] Setting default-storageclass=true in profile "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.771981   18635 addons.go:59] Setting storage-provisioner=true in profile "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.771999   18635 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.772004   18635 addons.go:135] Setting addon storage-provisioner=true in "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.771995   18635 addons.go:59] Setting dashboard=true in profile "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.772027   18635 addons.go:135] Setting addon dashboard=true in "old-k8s-version-20210816223154-6986"
	W0816 22:42:50.772039   18635 addons.go:147] addon dashboard should already be in state true
	I0816 22:42:50.772074   18635 host.go:66] Checking if "old-k8s-version-20210816223154-6986" exists ...
	I0816 22:42:50.771981   18635 addons.go:59] Setting metrics-server=true in profile "old-k8s-version-20210816223154-6986"
	I0816 22:42:50.772106   18635 addons.go:135] Setting addon metrics-server=true in "old-k8s-version-20210816223154-6986"
	W0816 22:42:50.772118   18635 addons.go:147] addon metrics-server should already be in state true
	I0816 22:42:50.772143   18635 host.go:66] Checking if "old-k8s-version-20210816223154-6986" exists ...
	W0816 22:42:50.772012   18635 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:42:50.772202   18635 host.go:66] Checking if "old-k8s-version-20210816223154-6986" exists ...
	I0816 22:42:50.772450   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.772491   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.772514   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.772550   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.772562   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.772590   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.772850   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.772907   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.786384   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43919
	I0816 22:42:50.786896   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.787436   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.787463   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.787854   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.788085   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.788330   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36229
	I0816 22:42:50.788749   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.789268   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.789290   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.789622   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.790176   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.790222   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.795830   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42399
	I0816 22:42:50.795865   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46039
	I0816 22:42:50.796347   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.796355   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.796868   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.796888   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.796872   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.796936   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.797257   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.797329   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.797807   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.797848   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.797871   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.797906   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.799195   18635 addons.go:135] Setting addon default-storageclass=true in "old-k8s-version-20210816223154-6986"
	W0816 22:42:50.799218   18635 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:42:50.799243   18635 host.go:66] Checking if "old-k8s-version-20210816223154-6986" exists ...
	I0816 22:42:50.799640   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.799681   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.810531   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40999
	I0816 22:42:50.811204   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.811785   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.811802   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.812347   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.812540   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.815618   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44099
	I0816 22:42:50.815827   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34305
	I0816 22:42:50.816141   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.816227   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.816697   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.816714   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.816835   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.816854   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.817100   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.817172   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.817189   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .DriverName
	I0816 22:42:50.817352   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.819885   18635 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:42:50.817704   18635 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:42:50.820954   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .DriverName
	I0816 22:42:50.821662   18635 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:42:50.821713   18635 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:42:50.821719   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:42:50.821731   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:42:50.821750   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHHostname
	I0816 22:42:50.823437   18635 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:42:50.822272   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33579
	I0816 22:42:50.823493   18635 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:42:50.823505   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:42:50.823522   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHHostname
	I0816 22:42:50.823823   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.824293   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.824311   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.824702   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.824895   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.828911   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.828954   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .DriverName
	I0816 22:42:47.677798   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:50.171236   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:50.830871   18635 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:42:50.830990   18635 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:42:50.831003   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:42:50.831019   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHHostname
	I0816 22:42:50.829748   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:23:71", ip: ""} in network mk-old-k8s-version-20210816223154-6986: {Iface:virbr6 ExpiryTime:2021-08-16 23:36:29 +0000 UTC Type:0 Mac:52:54:00:bf:23:71 Iaid: IPaddr:192.168.94.246 Prefix:24 Hostname:old-k8s-version-20210816223154-6986 Clientid:01:52:54:00:bf:23:71}
	I0816 22:42:50.831084   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined IP address 192.168.94.246 and MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.829926   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHPort
	I0816 22:42:50.830586   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.831142   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:23:71", ip: ""} in network mk-old-k8s-version-20210816223154-6986: {Iface:virbr6 ExpiryTime:2021-08-16 23:36:29 +0000 UTC Type:0 Mac:52:54:00:bf:23:71 Iaid: IPaddr:192.168.94.246 Prefix:24 Hostname:old-k8s-version-20210816223154-6986 Clientid:01:52:54:00:bf:23:71}
	I0816 22:42:50.831171   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined IP address 192.168.94.246 and MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.831303   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHKeyPath
	I0816 22:42:50.831452   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHUsername
	I0816 22:42:50.831626   18635 sshutil.go:53] new ssh client: &{IP:192.168.94.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa Username:docker}
	I0816 22:42:50.831935   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHPort
	I0816 22:42:50.832101   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHKeyPath
	I0816 22:42:50.832284   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHUsername
	I0816 22:42:50.832496   18635 sshutil.go:53] new ssh client: &{IP:192.168.94.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa Username:docker}
	I0816 22:42:50.835565   18635 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34581
	I0816 22:42:50.836045   18635 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:42:50.836624   18635 main.go:130] libmachine: Using API Version  1
	I0816 22:42:50.836646   18635 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:42:50.836952   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.837022   18635 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:42:50.837210   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetState
	I0816 22:42:50.837385   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:23:71", ip: ""} in network mk-old-k8s-version-20210816223154-6986: {Iface:virbr6 ExpiryTime:2021-08-16 23:36:29 +0000 UTC Type:0 Mac:52:54:00:bf:23:71 Iaid: IPaddr:192.168.94.246 Prefix:24 Hostname:old-k8s-version-20210816223154-6986 Clientid:01:52:54:00:bf:23:71}
	I0816 22:42:50.837420   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined IP address 192.168.94.246 and MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.837596   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHPort
	I0816 22:42:50.837797   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHKeyPath
	I0816 22:42:50.837973   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHUsername
	I0816 22:42:50.838150   18635 sshutil.go:53] new ssh client: &{IP:192.168.94.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa Username:docker}
	I0816 22:42:50.839968   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .DriverName
	I0816 22:42:50.840224   18635 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:42:50.840241   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:42:50.840256   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHHostname
	I0816 22:42:50.846248   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.846622   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bf:23:71", ip: ""} in network mk-old-k8s-version-20210816223154-6986: {Iface:virbr6 ExpiryTime:2021-08-16 23:36:29 +0000 UTC Type:0 Mac:52:54:00:bf:23:71 Iaid: IPaddr:192.168.94.246 Prefix:24 Hostname:old-k8s-version-20210816223154-6986 Clientid:01:52:54:00:bf:23:71}
	I0816 22:42:50.846648   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | domain old-k8s-version-20210816223154-6986 has defined IP address 192.168.94.246 and MAC address 52:54:00:bf:23:71 in network mk-old-k8s-version-20210816223154-6986
	I0816 22:42:50.846901   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHPort
	I0816 22:42:50.847072   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHKeyPath
	I0816 22:42:50.847256   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .GetSSHUsername
	I0816 22:42:50.847384   18635 sshutil.go:53] new ssh client: &{IP:192.168.94.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa Username:docker}
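	[Each addon goroutine resolves the VM's SSH endpoint the same way the DBG lines show: match the domain's MAC against libvirt's DHCP leases on the mk-old-k8s-version-20210816223154-6986 network, then connect as docker with the per-machine key. A hand-run equivalent, assuming virsh can see the same libvirt network (key path shortened from the Jenkins workspace path in the log):

	    virsh net-dhcp-leases mk-old-k8s-version-20210816223154-6986 | grep 52:54:00:bf:23:71
	    ssh -i ~/.minikube/machines/old-k8s-version-20210816223154-6986/id_rsa docker@192.168.94.246
	]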
	I0816 22:42:51.069324   18635 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-20210816223154-6986" to be "Ready" ...
	I0816 22:42:51.069363   18635 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:42:51.074198   18635 node_ready.go:49] node "old-k8s-version-20210816223154-6986" has status "Ready":"True"
	I0816 22:42:51.074219   18635 node_ready.go:38] duration metric: took 4.853226ms waiting for node "old-k8s-version-20210816223154-6986" to be "Ready" ...
	I0816 22:42:51.074228   18635 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:42:51.079427   18635 pod_ready.go:78] waiting up to 6m0s for pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace to be "Ready" ...
	I0816 22:42:51.095977   18635 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:42:51.095994   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:42:51.114667   18635 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:42:51.127402   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:42:51.127423   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:42:51.139080   18635 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:42:51.142203   18635 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:42:51.142227   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:42:51.184024   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:42:51.184049   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:42:51.229690   18635 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:42:51.229719   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:42:51.258163   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:42:51.258186   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:42:51.292848   18635 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:42:51.348950   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:42:51.348979   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:42:51.432982   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:42:51.433017   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:42:51.500730   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:42:51.500762   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:42:51.566104   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:42:51.566132   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:42:51.669547   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:42:51.669569   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:42:51.755011   18635 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:42:51.755042   18635 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:42:51.807684   18635 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
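	[The addon pattern above is uniform: every rendered manifest is streamed over SSH into /etc/kubernetes/addons (the `scp memory -->` lines), then each addon's files are applied in one batch with the control plane's own v1.14.0 kubectl so client and server versions match. A sketch of one manifest end to end, with $KEY standing in for the machine's id_rsa path and the local file name hypothetical:

	    ssh -i "$KEY" docker@192.168.94.246 \
	        'sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null' \
	        < storage-provisioner.yaml
	    ssh -i "$KEY" docker@192.168.94.246 \
	        'sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	           /var/lib/minikube/binaries/v1.14.0/kubectl apply \
	           -f /etc/kubernetes/addons/storage-provisioner.yaml'
	]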
	I0816 22:42:52.571594   18635 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.502197835s)
	I0816 22:42:52.571636   18635 start.go:728] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS
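	[start.go:728 confirms what the 1.5 s pipeline that just completed did: splice a hosts{} block in front of CoreDNS's forward-to-resolv.conf rule so pods resolve host.minikube.internal to the host-side gateway 192.168.94.1. The same pipeline, unrolled for readability:

	    KCTL="sudo /var/lib/minikube/binaries/v1.14.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
	    $KCTL -n kube-system get configmap coredns -o yaml \
	      | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' \
	      | $KCTL replace -f -
	]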
	I0816 22:42:52.759651   18635 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.644944376s)
	I0816 22:42:52.759687   18635 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.620572399s)
	I0816 22:42:52.759727   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.759743   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.759751   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.759765   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.760012   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.760058   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:52.760071   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.760080   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.760115   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.760131   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:52.760156   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.760170   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.761684   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.761690   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | Closing plugin on server side
	I0816 22:42:52.761704   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:52.761719   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | Closing plugin on server side
	I0816 22:42:52.761689   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.761794   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:52.761806   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:52.761817   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:52.762085   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:52.762108   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:53.390381   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:53.699731   18635 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.406829667s)
	I0816 22:42:53.699820   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:53.699836   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:53.700202   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:53.700222   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:53.700238   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:53.700249   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:53.700503   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:53.700523   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:53.700538   18635 addons.go:313] Verifying addon metrics-server=true in "old-k8s-version-20210816223154-6986"
	I0816 22:42:54.131359   18635 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.14.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.323617191s)
	I0816 22:42:54.131419   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:54.131434   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:54.131720   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) DBG | Closing plugin on server side
	I0816 22:42:54.131759   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:54.131767   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:54.131782   18635 main.go:130] libmachine: Making call to close driver server
	I0816 22:42:54.131793   18635 main.go:130] libmachine: (old-k8s-version-20210816223154-6986) Calling .Close
	I0816 22:42:54.132029   18635 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:42:54.132048   18635 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:42:50.574063   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:53.075372   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:52.670047   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:54.673975   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:54.134079   18635 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0816 22:42:54.134104   18635 addons.go:344] enableAddons completed in 3.364015112s
	I0816 22:42:55.589126   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:57.594328   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:55.581048   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:58.075675   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:57.167077   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:59.670483   18929 pod_ready.go:102] pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace has status "Ready":"False"
	I0816 22:42:59.594568   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:02.093248   18635 pod_ready.go:102] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:00.574293   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:02.574884   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:05.075277   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:02.159000   18929 pod_ready.go:81] duration metric: took 4m0.401738783s waiting for pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace to be "Ready" ...
	E0816 22:43:02.159021   18929 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-9xpsr" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:43:02.159049   18929 pod_ready.go:38] duration metric: took 4m41.323642164s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:02.159079   18929 kubeadm.go:604] restartCluster took 5m14.823391905s
	W0816 22:43:02.159203   18929 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:43:02.159238   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0816 22:43:05.238090   18929 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.078818721s)
	I0816 22:43:05.238168   18929 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:43:05.256580   18929 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:43:05.256649   18929 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:43:05.300644   18929 cri.go:76] found id: ""
	I0816 22:43:05.300755   18929 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:43:05.308191   18929 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:43:05.315888   18929 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:43:05.315936   18929 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0816 22:43:05.885054   18929 out.go:204]   - Generating certificates and keys ...
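	[Note the recovery strategy for PID 18929: after the 4m0s WaitExtra timeout it does not keep retrying in place; it tears the control plane down and re-runs kubeadm init against the already-rendered config, ignoring the preflight checks a previously-initialized node would fail. Abbreviated from the Run lines above (the full --ignore-preflight-errors list is in the log):

	    sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH \
	      kubeadm reset --cri-socket /run/containerd/containerd.sock --force
	    sudo systemctl stop -f kubelet
	    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	    sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,Mem
	]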
	I0816 22:43:04.591211   18635 pod_ready.go:92] pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:04.591250   18635 pod_ready.go:81] duration metric: took 13.511789308s waiting for pod "coredns-fb8b8dccf-r87qj" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:04.591266   18635 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jmg6d" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:04.598816   18635 pod_ready.go:92] pod "kube-proxy-jmg6d" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:04.598833   18635 pod_ready.go:81] duration metric: took 7.559474ms waiting for pod "kube-proxy-jmg6d" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:04.598842   18635 pod_ready.go:38] duration metric: took 13.524600915s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:04.598861   18635 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:43:04.598908   18635 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:43:04.615708   18635 api_server.go:70] duration metric: took 13.845635855s to wait for apiserver process to appear ...
	I0816 22:43:04.615739   18635 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:43:04.615748   18635 api_server.go:239] Checking apiserver healthz at https://192.168.94.246:8443/healthz ...
	I0816 22:43:04.624860   18635 api_server.go:265] https://192.168.94.246:8443/healthz returned 200:
	ok
	I0816 22:43:04.626456   18635 api_server.go:139] control plane version: v1.14.0
	I0816 22:43:04.626478   18635 api_server.go:129] duration metric: took 10.733471ms to wait for apiserver health ...
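	[api_server.go gates readiness in two steps, as the lines above show: pgrep for the apiserver process over SSH, then an HTTPS probe of /healthz until it answers 200 "ok". The same probe by hand (-k because minikube's self-generated CA is not in the local trust store):

	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    curl -k https://192.168.94.246:8443/healthz    # expect: ok
	]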
	I0816 22:43:04.626487   18635 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:43:04.631832   18635 system_pods.go:59] 4 kube-system pods found
	I0816 22:43:04.631861   18635 system_pods.go:61] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.631867   18635 system_pods.go:61] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.631877   18635 system_pods.go:61] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:04.631883   18635 system_pods.go:61] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.631892   18635 system_pods.go:74] duration metric: took 5.399191ms to wait for pod list to return data ...
	I0816 22:43:04.631901   18635 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:43:04.635992   18635 default_sa.go:45] found service account: "default"
	I0816 22:43:04.636015   18635 default_sa.go:55] duration metric: took 4.107562ms for default service account to be created ...
	I0816 22:43:04.636025   18635 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:43:04.640667   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:04.640691   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.640697   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.640704   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:04.640709   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.640726   18635 retry.go:31] will retry after 305.063636ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:04.951327   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:04.951357   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.951365   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.951377   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:04.951384   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:04.951402   18635 retry.go:31] will retry after 338.212508ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:05.295109   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:05.295143   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.295154   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.295165   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:05.295174   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.295193   18635 retry.go:31] will retry after 378.459802ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:05.683391   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:05.683423   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.683431   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.683442   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:05.683452   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:05.683472   18635 retry.go:31] will retry after 469.882201ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:06.158721   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:06.158752   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.158757   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.158765   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:06.158770   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.158786   18635 retry.go:31] will retry after 667.365439ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:06.831740   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:06.831771   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.831781   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.831790   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:06.831799   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:06.831818   18635 retry.go:31] will retry after 597.243124ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:07.434457   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:07.434482   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:07.434487   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:07.434494   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:07.434499   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:07.434513   18635 retry.go:31] will retry after 789.889932ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:07.075753   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:09.575726   19204 pod_ready.go:102] pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:06.996973   18929 out.go:204]   - Booting up control plane ...
	I0816 22:43:08.229786   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:08.229819   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:08.229827   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:08.229840   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:08.229845   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:08.229863   18635 retry.go:31] will retry after 951.868007ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:09.187817   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:09.187852   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:09.187862   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:09.187873   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:09.187878   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:09.187895   18635 retry.go:31] will retry after 1.341783893s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:10.534567   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:10.534608   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:10.534615   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:10.534627   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:10.534634   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:10.534652   18635 retry.go:31] will retry after 1.876813009s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:12.418546   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:12.418572   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:12.418579   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:12.418590   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:12.418596   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:12.418612   18635 retry.go:31] will retry after 2.6934314s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:11.066632   19204 pod_ready.go:81] duration metric: took 4m0.008014176s waiting for pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace to be "Ready" ...
	E0816 22:43:11.066660   19204 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-7c784ccb57-rmrr6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0816 22:43:11.066679   19204 pod_ready.go:38] duration metric: took 4m27.623084704s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:11.066704   19204 kubeadm.go:604] restartCluster took 5m3.415779611s
	W0816 22:43:11.066819   19204 out.go:242] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0816 22:43:11.066856   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force"
	I0816 22:43:14.269873   19204 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm reset --cri-socket /run/containerd/containerd.sock --force": (3.202987817s)
	I0816 22:43:14.269950   19204 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
	I0816 22:43:14.288386   19204 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:43:14.288469   19204 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:43:14.333856   19204 cri.go:76] found id: ""
	I0816 22:43:14.333935   19204 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:43:14.343737   19204 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:43:14.352599   19204 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:43:14.352646   19204 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.21.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
	I0816 22:43:14.930093   19204 out.go:204]   - Generating certificates and keys ...
	I0816 22:43:15.118830   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:15.118862   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:15.118872   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:15.118882   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:15.118889   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:15.118907   18635 retry.go:31] will retry after 2.494582248s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:17.619339   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:17.619375   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:17.619384   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:17.619395   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:17.619403   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:17.619422   18635 retry.go:31] will retry after 3.420895489s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:15.729873   19204 out.go:204]   - Booting up control plane ...
	I0816 22:43:21.047237   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:21.047269   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:21.047276   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:21.047287   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:21.047294   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:21.047310   18635 retry.go:31] will retry after 4.133785681s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:22.636356   18929 out.go:204]   - Configuring RBAC rules ...
	I0816 22:43:23.371015   18929 cni.go:93] Creating CNI manager for ""
	I0816 22:43:23.371043   18929 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:43:23.373006   18929 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:43:23.373076   18929 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:43:23.386712   18929 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:43:23.415554   18929 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:43:23.415693   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:23.415773   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=embed-certs-20210816223333-6986 minikube.k8s.io/updated_at=2021_08_16T22_43_23_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:24.042222   18929 ops.go:34] apiserver oom_adj: -16
	I0816 22:43:24.042207   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:24.699493   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:25.199877   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:25.699926   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:25.189718   18635 system_pods.go:86] 4 kube-system pods found
	I0816 22:43:25.189751   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:25.189758   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:25.189768   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:25.189775   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:25.189795   18635 retry.go:31] will retry after 5.595921491s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:26.199444   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:26.699547   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:27.199378   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:27.699869   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:28.200370   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:28.700011   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:29.199882   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:29.700066   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:30.200161   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:30.699359   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:31.887219   19204 out.go:204]   - Configuring RBAC rules ...
	I0816 22:43:32.571790   19204 cni.go:93] Creating CNI manager for ""
	I0816 22:43:32.571817   19204 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:43:30.804838   18635 system_pods.go:86] 8 kube-system pods found
	I0816 22:43:30.804876   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:30.804884   18635 system_pods.go:89] "etcd-old-k8s-version-20210816223154-6986" [61433b17-fee3-11eb-bea8-525400bf2371] Pending
	I0816 22:43:30.804891   18635 system_pods.go:89] "kube-apiserver-old-k8s-version-20210816223154-6986" [5e48aade-fee3-11eb-bea8-525400bf2371] Pending
	I0816 22:43:30.804897   18635 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210816223154-6986" [5e48d2c6-fee3-11eb-bea8-525400bf2371] Pending
	I0816 22:43:30.804902   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:30.804908   18635 system_pods.go:89] "kube-scheduler-old-k8s-version-20210816223154-6986" [60110a1b-fee3-11eb-bea8-525400bf2371] Pending
	I0816 22:43:30.804918   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:30.804925   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:30.804943   18635 retry.go:31] will retry after 6.3346098s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0816 22:43:32.573869   19204 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:43:32.573957   19204 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:43:32.585155   19204 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:43:32.601590   19204 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:43:32.601652   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:32.601677   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl label nodes minikube.k8s.io/version=v1.22.0 minikube.k8s.io/commit=fd21d4bdd7b0c16bb6b4998193bc3e21aa07dd48 minikube.k8s.io/name=default-k8s-different-port-20210816223418-6986 minikube.k8s.io/updated_at=2021_08_16T22_43_32_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:32.631177   19204 ops.go:34] apiserver oom_adj: -16
	I0816 22:43:33.115780   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:33.764597   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:34.265250   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:34.764717   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:31.200176   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:31.700178   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:32.200029   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:32.699789   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:33.200341   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:33.699709   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:34.199959   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:34.699635   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:35.199401   18929 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:35.497436   18929 kubeadm.go:985] duration metric: took 12.081799779s to wait for elevateKubeSystemPrivileges.
	I0816 22:43:35.497485   18929 kubeadm.go:392] StartCluster complete in 5m48.214136187s
	I0816 22:43:35.497508   18929 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:43:35.497637   18929 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:43:35.500294   18929 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:43:36.034903   18929 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "embed-certs-20210816223333-6986" rescaled to 1
	I0816 22:43:36.034983   18929 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.105.129 Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:43:36.036731   18929 out.go:177] * Verifying Kubernetes components...
	I0816 22:43:36.035020   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:43:36.036813   18929 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:43:36.035043   18929 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:43:36.036910   18929 addons.go:59] Setting storage-provisioner=true in profile "embed-certs-20210816223333-6986"
	I0816 22:43:36.036926   18929 addons.go:59] Setting default-storageclass=true in profile "embed-certs-20210816223333-6986"
	I0816 22:43:36.036937   18929 addons.go:135] Setting addon storage-provisioner=true in "embed-certs-20210816223333-6986"
	I0816 22:43:36.036942   18929 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-20210816223333-6986"
	W0816 22:43:36.036948   18929 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:43:36.036978   18929 host.go:66] Checking if "embed-certs-20210816223333-6986" exists ...
	I0816 22:43:36.037395   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.037430   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.037443   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.037445   18929 addons.go:59] Setting metrics-server=true in profile "embed-certs-20210816223333-6986"
	I0816 22:43:36.037462   18929 addons.go:135] Setting addon metrics-server=true in "embed-certs-20210816223333-6986"
	I0816 22:43:36.037464   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	W0816 22:43:36.037471   18929 addons.go:147] addon metrics-server should already be in state true
	I0816 22:43:36.036912   18929 addons.go:59] Setting dashboard=true in profile "embed-certs-20210816223333-6986"
	I0816 22:43:36.037504   18929 addons.go:135] Setting addon dashboard=true in "embed-certs-20210816223333-6986"
	W0816 22:43:36.037509   18929 addons.go:147] addon dashboard should already be in state true
	I0816 22:43:36.035195   18929 config.go:177] Loaded profile config "embed-certs-20210816223333-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:43:36.037546   18929 host.go:66] Checking if "embed-certs-20210816223333-6986" exists ...
	I0816 22:43:36.037680   18929 host.go:66] Checking if "embed-certs-20210816223333-6986" exists ...
	I0816 22:43:36.037934   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.037965   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.038094   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.038128   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.052922   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45791
	I0816 22:43:36.053393   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.053967   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.053996   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.054376   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.054999   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.055044   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.057606   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34275
	I0816 22:43:36.057965   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.058476   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.058504   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.058889   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.059518   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.059555   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.061564   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35827
	I0816 22:43:36.061953   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.062427   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.062448   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.062776   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.062919   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetState
	I0816 22:43:36.067479   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35939
	I0816 22:43:36.067916   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.068397   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.068420   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.068756   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.069319   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.069365   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.070906   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46287
	I0816 22:43:36.071487   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.071940   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.071962   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.072029   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43295
	I0816 22:43:36.072345   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.072346   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.072513   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetState
	I0816 22:43:36.072847   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.072869   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.073161   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.073332   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetState
	I0816 22:43:36.077207   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:43:36.077344   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:43:36.079180   18929 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:43:36.080548   18929 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:43:36.079295   18929 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:43:36.080582   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:43:36.080603   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:43:36.081867   18929 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:43:36.081926   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:43:36.081938   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:43:36.081954   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:43:36.082858   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37409
	I0816 22:43:36.083299   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.083845   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.083868   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.084213   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.084387   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetState
	I0816 22:43:36.086977   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.087634   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:43:36.087699   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:43:36.087722   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.087759   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:43:36.087803   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:43:36.087949   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:43:37.147660   18635 system_pods.go:86] 8 kube-system pods found
	I0816 22:43:37.147701   18635 system_pods.go:89] "coredns-fb8b8dccf-r87qj" [48ebe67d-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:37.147710   18635 system_pods.go:89] "etcd-old-k8s-version-20210816223154-6986" [61433b17-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:37.147718   18635 system_pods.go:89] "kube-apiserver-old-k8s-version-20210816223154-6986" [5e48aade-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:37.147724   18635 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210816223154-6986" [5e48d2c6-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:37.147730   18635 system_pods.go:89] "kube-proxy-jmg6d" [4905cd4b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:37.147736   18635 system_pods.go:89] "kube-scheduler-old-k8s-version-20210816223154-6986" [60110a1b-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:37.147745   18635 system_pods.go:89] "metrics-server-8546d8b77b-vpvp5" [4b7525e3-fee3-11eb-bea8-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:37.147755   18635 system_pods.go:89] "storage-provisioner" [4a660c8a-fee3-11eb-bea8-525400bf2371] Running
	I0816 22:43:37.147764   18635 system_pods.go:126] duration metric: took 32.511733609s to wait for k8s-apps to be running ...
	I0816 22:43:37.147783   18635 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:43:37.147836   18635 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:43:37.164370   18635 system_svc.go:56] duration metric: took 16.579311ms WaitForService to wait for kubelet.
	I0816 22:43:37.164403   18635 kubeadm.go:547] duration metric: took 46.394336574s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:43:37.164433   18635 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:43:37.168097   18635 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:43:37.168129   18635 node_conditions.go:123] node cpu capacity is 2
	I0816 22:43:37.168144   18635 node_conditions.go:105] duration metric: took 3.70586ms to run NodePressure ...
	I0816 22:43:37.168156   18635 start.go:231] waiting for startup goroutines ...
	I0816 22:43:37.217144   18635 start.go:462] kubectl: 1.20.5, cluster: 1.14.0 (minor skew: 6)
	I0816 22:43:37.219305   18635 out.go:177] 
	W0816 22:43:37.219480   18635 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.14.0.
	I0816 22:43:37.221278   18635 out.go:177]   - Want kubectl v1.14.0? Try 'minikube kubectl -- get pods -A'
	I0816 22:43:37.223010   18635 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-20210816223154-6986" cluster and "default" namespace by default
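The version-skew warning above comes with its own remedy: minikube ships a bundled kubectl matching the cluster's Kubernetes version. A minimal sketch of acting on the logged hint for this profile (the -p/--profile flag is a standard minikube global option; this invocation is an illustration, not part of the captured run):

	out/minikube-linux-amd64 kubectl -p old-k8s-version-20210816223154-6986 -- get pods -A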
	I0816 22:43:35.265455   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:35.765450   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:36.264605   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:36.764601   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:37.265049   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:37.764595   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:38.265287   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:38.764994   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:39.265056   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:39.765476   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:36.089340   18929 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:43:36.089400   18929 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:43:36.089413   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:43:36.088130   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:43:36.089430   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:43:36.088890   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.089473   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:43:36.089505   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.089703   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:43:36.089898   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:43:36.090090   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:43:36.090267   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:43:36.094836   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.095297   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:43:36.095323   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.095512   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:43:36.095645   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:43:36.095759   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:43:36.095851   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:43:36.098057   18929 addons.go:135] Setting addon default-storageclass=true in "embed-certs-20210816223333-6986"
	W0816 22:43:36.098079   18929 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:43:36.098104   18929 host.go:66] Checking if "embed-certs-20210816223333-6986" exists ...
	I0816 22:43:36.098559   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.098603   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.109741   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:39575
	I0816 22:43:36.110180   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.110794   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.110819   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.111190   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.111821   18929 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:36.111864   18929 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:36.123621   18929 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45183
	I0816 22:43:36.124053   18929 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:36.124503   18929 main.go:130] libmachine: Using API Version  1
	I0816 22:43:36.124519   18929 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:36.124829   18929 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:36.125022   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetState
	I0816 22:43:36.128253   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .DriverName
	I0816 22:43:36.128476   18929 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:43:36.128494   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:43:36.128513   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHHostname
	I0816 22:43:36.134156   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.134493   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:30:a1:6d", ip: ""} in network mk-embed-certs-20210816223333-6986: {Iface:virbr7 ExpiryTime:2021-08-16 23:37:20 +0000 UTC Type:0 Mac:52:54:00:30:a1:6d Iaid: IPaddr:192.168.105.129 Prefix:24 Hostname:embed-certs-20210816223333-6986 Clientid:01:52:54:00:30:a1:6d}
	I0816 22:43:36.134521   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | domain embed-certs-20210816223333-6986 has defined IP address 192.168.105.129 and MAC address 52:54:00:30:a1:6d in network mk-embed-certs-20210816223333-6986
	I0816 22:43:36.134626   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHPort
	I0816 22:43:36.134834   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHKeyPath
	I0816 22:43:36.135010   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .GetSSHUsername
	I0816 22:43:36.135176   18929 sshutil.go:53] new ssh client: &{IP:192.168.105.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/embed-certs-20210816223333-6986/id_rsa Username:docker}
	I0816 22:43:36.334796   18929 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:43:36.462564   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:43:36.462619   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:43:36.510558   18929 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:43:36.513334   18929 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:43:36.513356   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:43:36.551208   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:43:36.551256   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:43:36.570189   18929 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:43:36.570216   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:43:36.657218   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:43:36.657250   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:43:36.692197   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:43:36.692227   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:43:36.774111   18929 node_ready.go:35] waiting up to 6m0s for node "embed-certs-20210816223333-6986" to be "Ready" ...
	I0816 22:43:36.774340   18929 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:43:36.790295   18929 node_ready.go:49] node "embed-certs-20210816223333-6986" has status "Ready":"True"
	I0816 22:43:36.790320   18929 node_ready.go:38] duration metric: took 16.177495ms waiting for node "embed-certs-20210816223333-6986" to be "Ready" ...
	I0816 22:43:36.790335   18929 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:36.797297   18929 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-6zv97" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:36.858095   18929 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:43:36.858120   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:43:36.981263   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:43:36.981292   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:43:37.007726   18929 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:43:37.229172   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:43:37.229198   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:43:37.412428   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:43:37.412464   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:43:37.604490   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:43:37.604516   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:43:37.864046   18929 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:43:37.864072   18929 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:43:37.954509   18929 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:43:38.628148   18929 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.293312909s)
	I0816 22:43:38.628197   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:38.628206   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:38.628466   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:38.628488   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:38.628499   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:38.628509   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:38.628847   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:38.628869   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:38.797491   18929 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.286893282s)
	I0816 22:43:38.797551   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:38.797565   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:38.797846   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Closing plugin on server side
	I0816 22:43:38.797888   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:38.797896   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:38.797904   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:38.797913   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:38.798184   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:38.798203   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:38.798216   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:38.798226   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:38.798467   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:38.798483   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:38.814757   18929 pod_ready.go:102] pod "coredns-558bd4d5db-6zv97" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:39.223137   18929 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.105.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.448766478s)
	I0816 22:43:39.223172   18929 start.go:728] {"host.minikube.internal": 192.168.105.1} host record injected into CoreDNS
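The pipeline completed above splices a hosts block into the CoreDNS Corefile just ahead of its forward directive, then replaces the configmap. Reconstructed from the sed expression in the logged command, the injected stanza is:

	hosts {
	   192.168.105.1 host.minikube.internal
	   fallthrough
	}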
	I0816 22:43:39.504206   18929 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.496431754s)
	I0816 22:43:39.504273   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:39.504297   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:39.504564   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:39.504585   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:39.504598   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:39.504611   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:39.504854   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:39.504863   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Closing plugin on server side
	I0816 22:43:39.504875   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:39.504890   18929 addons.go:313] Verifying addon metrics-server=true in "embed-certs-20210816223333-6986"
	I0816 22:43:39.815632   18929 pod_ready.go:97] error getting pod "coredns-558bd4d5db-6zv97" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-6zv97" not found
	I0816 22:43:39.815668   18929 pod_ready.go:81] duration metric: took 3.018337051s waiting for pod "coredns-558bd4d5db-6zv97" in "kube-system" namespace to be "Ready" ...
	E0816 22:43:39.815681   18929 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-558bd4d5db-6zv97" in "kube-system" namespace (skipping!): pods "coredns-558bd4d5db-6zv97" not found
	I0816 22:43:39.815691   18929 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-mfshm" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:40.809470   18929 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.854902802s)
	I0816 22:43:40.809543   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:40.809566   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:40.811279   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:40.811299   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:40.811310   18929 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:40.811320   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) Calling .Close
	I0816 22:43:40.811328   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Closing plugin on server side
	I0816 22:43:40.811538   18929 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:40.811553   18929 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:40.811561   18929 main.go:130] libmachine: (embed-certs-20210816223333-6986) DBG | Closing plugin on server side
	I0816 22:43:40.813830   18929 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0816 22:43:40.813854   18929 addons.go:344] enableAddons completed in 4.778818205s
	I0816 22:43:41.867317   18929 pod_ready.go:102] pod "coredns-558bd4d5db-mfshm" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:43.368862   18929 pod_ready.go:92] pod "coredns-558bd4d5db-mfshm" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:43.368890   18929 pod_ready.go:81] duration metric: took 3.553191611s waiting for pod "coredns-558bd4d5db-mfshm" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.368903   18929 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.378704   18929 pod_ready.go:92] pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:43.378725   18929 pod_ready.go:81] duration metric: took 9.814161ms waiting for pod "etcd-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.378739   18929 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.402730   18929 pod_ready.go:92] pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:43.402755   18929 pod_ready.go:81] duration metric: took 24.005322ms waiting for pod "kube-apiserver-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.402769   18929 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.411087   18929 pod_ready.go:92] pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:43.411108   18929 pod_ready.go:81] duration metric: took 8.330836ms waiting for pod "kube-controller-manager-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.411120   18929 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zwcwz" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.420161   18929 pod_ready.go:92] pod "kube-proxy-zwcwz" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:43.420183   18929 pod_ready.go:81] duration metric: took 9.054321ms waiting for pod "kube-proxy-zwcwz" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.420195   18929 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.764290   18929 pod_ready.go:92] pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:43.764315   18929 pod_ready.go:81] duration metric: took 344.109074ms waiting for pod "kube-scheduler-embed-certs-20210816223333-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:43.764327   18929 pod_ready.go:38] duration metric: took 6.973978865s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:43.764347   18929 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:43:43.764398   18929 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:43:43.785185   18929 api_server.go:70] duration metric: took 7.750163085s to wait for apiserver process to appear ...
	I0816 22:43:43.785212   18929 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:43:43.785222   18929 api_server.go:239] Checking apiserver healthz at https://192.168.105.129:8443/healthz ...
	I0816 22:43:43.795735   18929 api_server.go:265] https://192.168.105.129:8443/healthz returned 200:
	ok
	I0816 22:43:43.797225   18929 api_server.go:139] control plane version: v1.21.3
	I0816 22:43:43.797243   18929 api_server.go:129] duration metric: took 12.025112ms to wait for apiserver health ...
	I0816 22:43:43.797252   18929 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:43:43.971546   18929 system_pods.go:59] 8 kube-system pods found
	I0816 22:43:43.971578   18929 system_pods.go:61] "coredns-558bd4d5db-mfshm" [cb9ac226-b63f-4de1-b4af-b8e2bf280d95] Running
	I0816 22:43:43.971584   18929 system_pods.go:61] "etcd-embed-certs-20210816223333-6986" [333a4b44-c417-46e6-8653-c1d24391c7ca] Running
	I0816 22:43:43.971590   18929 system_pods.go:61] "kube-apiserver-embed-certs-20210816223333-6986" [414c58e9-8dcf-4f0c-9a5e-ff21a694067d] Running
	I0816 22:43:43.971596   18929 system_pods.go:61] "kube-controller-manager-embed-certs-20210816223333-6986" [c80d067f-ee6a-4e6a-b062-c2ff64c6bd81] Running
	I0816 22:43:43.971601   18929 system_pods.go:61] "kube-proxy-zwcwz" [f85562a3-8576-4dbf-a2b2-3f6a3d199df3] Running
	I0816 22:43:43.971608   18929 system_pods.go:61] "kube-scheduler-embed-certs-20210816223333-6986" [92b9b318-e6e4-4891-9609-5fe26593bcdb] Running
	I0816 22:43:43.971621   18929 system_pods.go:61] "metrics-server-7c784ccb57-qfrpw" [abb75357-7b33-4327-aa7f-8e9c15a192f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:43.971632   18929 system_pods.go:61] "storage-provisioner" [f3fc0038-f88e-416f-81e3-fb387b0e010a] Running
	I0816 22:43:43.971639   18929 system_pods.go:74] duration metric: took 174.380965ms to wait for pod list to return data ...
	I0816 22:43:43.971647   18929 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:43:44.164541   18929 default_sa.go:45] found service account: "default"
	I0816 22:43:44.164564   18929 default_sa.go:55] duration metric: took 192.910888ms for default service account to be created ...
	I0816 22:43:44.164584   18929 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:43:44.367138   18929 system_pods.go:86] 8 kube-system pods found
	I0816 22:43:44.367172   18929 system_pods.go:89] "coredns-558bd4d5db-mfshm" [cb9ac226-b63f-4de1-b4af-b8e2bf280d95] Running
	I0816 22:43:44.367181   18929 system_pods.go:89] "etcd-embed-certs-20210816223333-6986" [333a4b44-c417-46e6-8653-c1d24391c7ca] Running
	I0816 22:43:44.367190   18929 system_pods.go:89] "kube-apiserver-embed-certs-20210816223333-6986" [414c58e9-8dcf-4f0c-9a5e-ff21a694067d] Running
	I0816 22:43:44.367197   18929 system_pods.go:89] "kube-controller-manager-embed-certs-20210816223333-6986" [c80d067f-ee6a-4e6a-b062-c2ff64c6bd81] Running
	I0816 22:43:44.367204   18929 system_pods.go:89] "kube-proxy-zwcwz" [f85562a3-8576-4dbf-a2b2-3f6a3d199df3] Running
	I0816 22:43:44.367211   18929 system_pods.go:89] "kube-scheduler-embed-certs-20210816223333-6986" [92b9b318-e6e4-4891-9609-5fe26593bcdb] Running
	I0816 22:43:44.367229   18929 system_pods.go:89] "metrics-server-7c784ccb57-qfrpw" [abb75357-7b33-4327-aa7f-8e9c15a192f8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:44.367239   18929 system_pods.go:89] "storage-provisioner" [f3fc0038-f88e-416f-81e3-fb387b0e010a] Running
	I0816 22:43:44.367248   18929 system_pods.go:126] duration metric: took 202.65882ms to wait for k8s-apps to be running ...
	I0816 22:43:44.367259   18929 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:43:44.367307   18929 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:43:44.381654   18929 system_svc.go:56] duration metric: took 14.389765ms WaitForService to wait for kubelet.
	I0816 22:43:44.381678   18929 kubeadm.go:547] duration metric: took 8.346663342s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:43:44.381702   18929 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:43:44.563414   18929 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:43:44.563447   18929 node_conditions.go:123] node cpu capacity is 2
	I0816 22:43:44.563461   18929 node_conditions.go:105] duration metric: took 181.753579ms to run NodePressure ...
	I0816 22:43:44.563473   18929 start.go:231] waiting for startup goroutines ...
	I0816 22:43:44.614237   18929 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 22:43:44.616600   18929 out.go:177] * Done! kubectl is now configured to use "embed-certs-20210816223333-6986" cluster and "default" namespace by default
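	
	The start sequence above gates on the apiserver's /healthz endpoint returning 200 before checking the control-plane version and waiting for the kube-system pods. A minimal Go sketch of that polling step (not minikube's actual implementation; the probe interval, timeout budget, and insecure TLS client are illustrative assumptions):
	
```go
// Poll an apiserver healthz URL until it returns HTTP 200 "ok" or a
// deadline expires, mirroring the "Checking apiserver healthz" lines
// in the log above. Interval and timeout are illustrative.
package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The apiserver serves TLS with a cluster-local CA, so this
	// throwaway probe client skips certificate verification.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("healthz did not return 200 before timeout")
}

func main() {
	if err := waitForHealthz("https://192.168.105.129:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```
	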
	I0816 22:43:40.264690   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:40.765483   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:41.264614   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:41.764581   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:42.265395   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:42.764674   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:43.265319   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:43.765315   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:44.265020   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:44.764726   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:45.265506   19204 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.21.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0816 22:43:45.590495   19204 kubeadm.go:985] duration metric: took 12.988891092s to wait for elevateKubeSystemPrivileges.
	I0816 22:43:45.590529   19204 kubeadm.go:392] StartCluster complete in 5m37.979340771s
	I0816 22:43:45.590548   19204 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:43:45.590642   19204 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:43:45.593541   19204 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0816 22:43:45.657324   19204 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0816 22:43:46.665400   19204 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "default-k8s-different-port-20210816223418-6986" rescaled to 1
	I0816 22:43:46.665482   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:43:46.665515   19204 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.50.186 Port:8444 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}
	I0816 22:43:46.667711   19204 out.go:177] * Verifying Kubernetes components...
	I0816 22:43:46.667773   19204 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:43:46.665580   19204 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:43:46.667837   19204 addons.go:59] Setting dashboard=true in profile "default-k8s-different-port-20210816223418-6986"
	I0816 22:43:46.667852   19204 addons.go:59] Setting default-storageclass=true in profile "default-k8s-different-port-20210816223418-6986"
	I0816 22:43:46.667860   19204 addons.go:135] Setting addon dashboard=true in "default-k8s-different-port-20210816223418-6986"
	I0816 22:43:46.667868   19204 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-different-port-20210816223418-6986"
	I0816 22:43:46.667838   19204 addons.go:59] Setting storage-provisioner=true in profile "default-k8s-different-port-20210816223418-6986"
	I0816 22:43:46.667885   19204 addons.go:135] Setting addon storage-provisioner=true in "default-k8s-different-port-20210816223418-6986"
	W0816 22:43:46.667894   19204 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:43:46.667927   19204 host.go:66] Checking if "default-k8s-different-port-20210816223418-6986" exists ...
	I0816 22:43:46.668329   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.668351   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.668368   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.668386   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.665780   19204 config.go:177] Loaded profile config "default-k8s-different-port-20210816223418-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:43:46.668451   19204 addons.go:59] Setting metrics-server=true in profile "default-k8s-different-port-20210816223418-6986"
	W0816 22:43:46.667870   19204 addons.go:147] addon dashboard should already be in state true
	I0816 22:43:46.668473   19204 addons.go:135] Setting addon metrics-server=true in "default-k8s-different-port-20210816223418-6986"
	W0816 22:43:46.668483   19204 addons.go:147] addon metrics-server should already be in state true
	I0816 22:43:46.668492   19204 host.go:66] Checking if "default-k8s-different-port-20210816223418-6986" exists ...
	I0816 22:43:46.668520   19204 host.go:66] Checking if "default-k8s-different-port-20210816223418-6986" exists ...
	I0816 22:43:46.668864   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.668905   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.668950   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.668990   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.689974   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42521
	I0816 22:43:46.690669   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.691280   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.691314   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.691679   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.692276   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.692315   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.692464   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43883
	I0816 22:43:46.693031   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.693526   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.693553   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.693968   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.694137   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetState
	I0816 22:43:46.705753   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38281
	I0816 22:43:46.706172   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.707061   19204 addons.go:135] Setting addon default-storageclass=true in "default-k8s-different-port-20210816223418-6986"
	W0816 22:43:46.707082   19204 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:43:46.707108   19204 host.go:66] Checking if "default-k8s-different-port-20210816223418-6986" exists ...
	I0816 22:43:46.707465   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.707503   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.707516   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.707545   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.707576   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44259
	I0816 22:43:46.707845   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.707927   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.708047   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetState
	I0816 22:43:46.708442   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.708498   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.708850   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40817
	I0816 22:43:46.708875   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.709295   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.709795   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.709831   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.709802   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.709896   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.710319   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.710841   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.710885   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.712390   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:43:46.714605   19204 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:43:46.714712   19204 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:43:46.714728   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:43:46.714749   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:43:46.721156   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.721380   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:43:46.721409   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.721629   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:43:46.721735   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:43:46.721864   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:43:46.721924   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:43:46.729886   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41981
	I0816 22:43:46.730281   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.730709   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.730725   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.731239   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.731805   19204 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:43:46.731916   19204 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:43:46.731997   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0816 22:43:46.732368   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.732825   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.732847   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.733209   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.733449   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetState
	I0816 22:43:46.734015   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:32775
	I0816 22:43:46.734430   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.735096   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.735146   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.735539   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.735710   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetState
	I0816 22:43:46.737120   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:43:46.739055   19204 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:43:46.738848   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:43:46.739120   19204 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:43:46.739134   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:43:46.739158   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:43:46.740784   19204 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:43:46.742206   19204 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:43:46.742257   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:43:46.742270   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:43:46.742288   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:43:46.745626   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.746290   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:43:46.746384   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.746885   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:43:46.747264   19204 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41917
	I0816 22:43:46.747662   19204 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:43:46.748053   19204 main.go:130] libmachine: Using API Version  1
	I0816 22:43:46.748065   19204 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:43:46.748398   19204 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:43:46.748516   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetState
	I0816 22:43:46.748635   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.749011   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:43:46.749029   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.749196   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:43:46.749309   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:43:46.749445   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:43:46.749576   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:43:46.751724   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .DriverName
	I0816 22:43:46.751878   19204 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:43:46.751885   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:43:46.751895   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHHostname
	I0816 22:43:46.755264   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:43:46.755420   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:43:46.755543   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:43:46.757535   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.757844   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ed:d4:05", ip: ""} in network mk-default-k8s-different-port-20210816223418-6986: {Iface:virbr2 ExpiryTime:2021-08-16 23:37:39 +0000 UTC Type:0 Mac:52:54:00:ed:d4:05 Iaid: IPaddr:192.168.50.186 Prefix:24 Hostname:default-k8s-different-port-20210816223418-6986 Clientid:01:52:54:00:ed:d4:05}
	I0816 22:43:46.757880   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | domain default-k8s-different-port-20210816223418-6986 has defined IP address 192.168.50.186 and MAC address 52:54:00:ed:d4:05 in network mk-default-k8s-different-port-20210816223418-6986
	I0816 22:43:46.757947   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHPort
	I0816 22:43:46.758111   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHKeyPath
	I0816 22:43:46.758232   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .GetSSHUsername
	I0816 22:43:46.758336   19204 sshutil.go:53] new ssh client: &{IP:192.168.50.186 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/default-k8s-different-port-20210816223418-6986/id_rsa Username:docker}
	I0816 22:43:46.912338   19204 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:43:46.928084   19204 node_ready.go:35] waiting up to 6m0s for node "default-k8s-different-port-20210816223418-6986" to be "Ready" ...
	I0816 22:43:46.928162   19204 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0816 22:43:46.932654   19204 node_ready.go:49] node "default-k8s-different-port-20210816223418-6986" has status "Ready":"True"
	I0816 22:43:46.932677   19204 node_ready.go:38] duration metric: took 4.560299ms waiting for node "default-k8s-different-port-20210816223418-6986" to be "Ready" ...
	I0816 22:43:46.932688   19204 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:46.938801   19204 pod_ready.go:78] waiting up to 6m0s for pod "coredns-558bd4d5db-jvhn9" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:46.959212   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:43:46.959239   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:43:46.980444   19204 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:43:46.992693   19204 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:43:46.992712   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:43:47.139897   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:43:47.140481   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:43:47.283513   19204 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:43:47.283548   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:43:47.307099   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:43:47.307124   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:43:47.337466   19204 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:43:47.337491   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:43:47.400423   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:43:47.400457   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:43:47.428735   19204 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:43:47.473437   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:43:47.473470   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:43:47.809043   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:43:47.809076   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:43:48.151719   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:43:48.151750   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:43:48.433383   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:43:48.433418   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:43:48.581909   19204 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:43:48.581937   19204 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:43:48.707807   19204 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:43:48.976179   19204 pod_ready.go:102] pod "coredns-558bd4d5db-jvhn9" in "kube-system" namespace has status "Ready":"False"
	I0816 22:43:49.433748   19204 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.21.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.505548681s)
	I0816 22:43:49.433787   19204 start.go:728] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS
	I0816 22:43:49.434692   19204 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.522311056s)
	I0816 22:43:49.434732   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:49.434747   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:49.435098   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:49.435119   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:49.435131   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:49.435132   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Closing plugin on server side
	I0816 22:43:49.435143   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:49.435401   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:49.435415   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:49.725705   19204 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.745219751s)
	I0816 22:43:49.725764   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:49.725779   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:49.726085   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:49.726107   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:49.726124   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:49.726137   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:49.726388   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:49.726414   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:49.726427   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:49.726428   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Closing plugin on server side
	I0816 22:43:49.726440   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:49.726685   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Closing plugin on server side
	I0816 22:43:49.726730   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:49.726743   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:50.084265   19204 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.655473327s)
	I0816 22:43:50.084320   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:50.084336   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:50.084638   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:50.084661   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:50.084671   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:50.084682   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:50.084904   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:50.084916   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:50.084935   19204 addons.go:313] Verifying addon metrics-server=true in "default-k8s-different-port-20210816223418-6986"
	I0816 22:43:51.000374   19204 pod_ready.go:92] pod "coredns-558bd4d5db-jvhn9" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:51.000409   19204 pod_ready.go:81] duration metric: took 4.061576094s waiting for pod "coredns-558bd4d5db-jvhn9" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.000426   19204 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.042067   19204 pod_ready.go:92] pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:51.042087   19204 pod_ready.go:81] duration metric: took 41.651304ms waiting for pod "etcd-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.042101   19204 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.076320   19204 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.21.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.368445421s)
	I0816 22:43:51.076371   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:51.076392   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:51.076636   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:51.076655   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:51.076666   19204 main.go:130] libmachine: Making call to close driver server
	I0816 22:43:51.076676   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) Calling .Close
	I0816 22:43:51.076961   19204 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:43:51.076973   19204 main.go:130] libmachine: (default-k8s-different-port-20210816223418-6986) DBG | Closing plugin on server side
	I0816 22:43:51.076983   19204 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:43:51.078741   19204 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0816 22:43:51.078768   19204 addons.go:344] enableAddons completed in 4.413194371s
	I0816 22:43:51.095847   19204 pod_ready.go:92] pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:51.095868   19204 pod_ready.go:81] duration metric: took 53.758678ms waiting for pod "kube-apiserver-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.095885   19204 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.120117   19204 pod_ready.go:92] pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:51.120136   19204 pod_ready.go:81] duration metric: took 24.240957ms waiting for pod "kube-controller-manager-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.120151   19204 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qhsq8" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.137975   19204 pod_ready.go:92] pod "kube-proxy-qhsq8" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:51.138000   19204 pod_ready.go:81] duration metric: took 17.840798ms waiting for pod "kube-proxy-qhsq8" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.138013   19204 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.361490   19204 pod_ready.go:92] pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace has status "Ready":"True"
	I0816 22:43:51.361513   19204 pod_ready.go:81] duration metric: took 223.49089ms waiting for pod "kube-scheduler-default-k8s-different-port-20210816223418-6986" in "kube-system" namespace to be "Ready" ...
	I0816 22:43:51.361522   19204 pod_ready.go:38] duration metric: took 4.428821843s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0816 22:43:51.361535   19204 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:43:51.361593   19204 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:43:51.390742   19204 api_server.go:70] duration metric: took 4.724914292s to wait for apiserver process to appear ...
	I0816 22:43:51.390767   19204 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:43:51.390777   19204 api_server.go:239] Checking apiserver healthz at https://192.168.50.186:8444/healthz ...
	I0816 22:43:51.398481   19204 api_server.go:265] https://192.168.50.186:8444/healthz returned 200:
	ok
	I0816 22:43:51.402341   19204 api_server.go:139] control plane version: v1.21.3
	I0816 22:43:51.402366   19204 api_server.go:129] duration metric: took 11.590514ms to wait for apiserver health ...
	I0816 22:43:51.402376   19204 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:43:51.553058   19204 system_pods.go:59] 8 kube-system pods found
	I0816 22:43:51.553092   19204 system_pods.go:61] "coredns-558bd4d5db-jvhn9" [3c48c2dc-4beb-4359-aadc-1365db48feac] Running
	I0816 22:43:51.553102   19204 system_pods.go:61] "etcd-default-k8s-different-port-20210816223418-6986" [1ec44a23-d678-413f-bc79-1b3b24c77422] Running
	I0816 22:43:51.553109   19204 system_pods.go:61] "kube-apiserver-default-k8s-different-port-20210816223418-6986" [9246fbb2-2bd6-42a5-ad37-66c828343f50] Running
	I0816 22:43:51.553116   19204 system_pods.go:61] "kube-controller-manager-default-k8s-different-port-20210816223418-6986" [974dfb9a-e4b0-4aee-862f-6b0b06f6491e] Running
	I0816 22:43:51.553122   19204 system_pods.go:61] "kube-proxy-qhsq8" [9abb9351-b721-48bb-94b9-887b5afc7584] Running
	I0816 22:43:51.553128   19204 system_pods.go:61] "kube-scheduler-default-k8s-different-port-20210816223418-6986" [b73b2730-6367-4d16-90c7-4ba6ec17f6ef] Running
	I0816 22:43:51.553142   19204 system_pods.go:61] "metrics-server-7c784ccb57-pbxnr" [fa2d27a5-b243-4a8f-9450-b834d1ce5bb0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:51.553155   19204 system_pods.go:61] "storage-provisioner" [a88a523b-5707-46b9-b7cf-6931db0d4487] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:43:51.553166   19204 system_pods.go:74] duration metric: took 150.783692ms to wait for pod list to return data ...
	I0816 22:43:51.553177   19204 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:43:51.749364   19204 default_sa.go:45] found service account: "default"
	I0816 22:43:51.749393   19204 default_sa.go:55] duration metric: took 196.209447ms for default service account to be created ...
	I0816 22:43:51.749405   19204 system_pods.go:116] waiting for k8s-apps to be running ...
	I0816 22:43:51.953876   19204 system_pods.go:86] 8 kube-system pods found
	I0816 22:43:51.953914   19204 system_pods.go:89] "coredns-558bd4d5db-jvhn9" [3c48c2dc-4beb-4359-aadc-1365db48feac] Running
	I0816 22:43:51.953923   19204 system_pods.go:89] "etcd-default-k8s-different-port-20210816223418-6986" [1ec44a23-d678-413f-bc79-1b3b24c77422] Running
	I0816 22:43:51.953931   19204 system_pods.go:89] "kube-apiserver-default-k8s-different-port-20210816223418-6986" [9246fbb2-2bd6-42a5-ad37-66c828343f50] Running
	I0816 22:43:51.953938   19204 system_pods.go:89] "kube-controller-manager-default-k8s-different-port-20210816223418-6986" [974dfb9a-e4b0-4aee-862f-6b0b06f6491e] Running
	I0816 22:43:51.953949   19204 system_pods.go:89] "kube-proxy-qhsq8" [9abb9351-b721-48bb-94b9-887b5afc7584] Running
	I0816 22:43:51.953958   19204 system_pods.go:89] "kube-scheduler-default-k8s-different-port-20210816223418-6986" [b73b2730-6367-4d16-90c7-4ba6ec17f6ef] Running
	I0816 22:43:51.953971   19204 system_pods.go:89] "metrics-server-7c784ccb57-pbxnr" [fa2d27a5-b243-4a8f-9450-b834d1ce5bb0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0816 22:43:51.953985   19204 system_pods.go:89] "storage-provisioner" [a88a523b-5707-46b9-b7cf-6931db0d4487] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:43:51.954000   19204 system_pods.go:126] duration metric: took 204.589729ms to wait for k8s-apps to be running ...
	I0816 22:43:51.954014   19204 system_svc.go:44] waiting for kubelet service to be running ....
	I0816 22:43:51.954066   19204 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:43:51.982620   19204 system_svc.go:56] duration metric: took 28.600519ms WaitForService to wait for kubelet.
	I0816 22:43:51.982645   19204 kubeadm.go:547] duration metric: took 5.316821186s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0816 22:43:51.982666   19204 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:43:52.146042   19204 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:43:52.146082   19204 node_conditions.go:123] node cpu capacity is 2
	I0816 22:43:52.146096   19204 node_conditions.go:105] duration metric: took 163.423737ms to run NodePressure ...
	I0816 22:43:52.146108   19204 start.go:231] waiting for startup goroutines ...
	I0816 22:43:52.193059   19204 start.go:462] kubectl: 1.20.5, cluster: 1.21.3 (minor skew: 1)
	I0816 22:43:52.195545   19204 out.go:177] * Done! kubectl is now configured to use "default-k8s-different-port-20210816223418-6986" cluster and "default" namespace by default
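	
	The configmap rewrite that completed at 22:43:49 above pipes the live CoreDNS Corefile through sed to insert a hosts block resolving host.minikube.internal to the host gateway (192.168.50.1) immediately before the forward plugin, then applies the result with kubectl replace. A sketch of that string edit in Go (illustrative only; the sample Corefile in main is an assumption):
	
```go
// Insert a "hosts" block ahead of "forward . /etc/resolv.conf" so
// CoreDNS answers host.minikube.internal locally and falls through
// for everything else, mirroring the sed expression in the log.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		// Place the hosts block immediately before the forward plugin.
		if strings.HasPrefix(strings.TrimLeft(line, " "), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.50.1"))
}
```
	
	Once the replaced ConfigMap is picked up, the "host record injected into CoreDNS" line appears, as in the 22:43:49 log entry above.
	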
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID
	cf018e1aa6e20       523cad1a4df73       22 seconds ago      Exited              dashboard-metrics-scraper   1                   94f02cd419011
	dfd15db9428f2       6e38f40d628db       28 seconds ago      Exited              storage-provisioner         0                   cd0f8b9910c05
	091f1b89bc6ac       9a07b5b4bfac0       29 seconds ago      Running             kubernetes-dashboard        0                   35bd425379895
	def14af399b57       296a6d5035e2d       33 seconds ago      Running             coredns                     0                   e49dbc663b365
	59465014a8ff9       adb2816ea823a       35 seconds ago      Running             kube-proxy                  0                   ba0dd77bce835
	57f34c516cefd       0369cf4303ffd       58 seconds ago      Running             etcd                        0                   1edc418e603d1
	91c23817fb7ab       6be0dc1302e30       58 seconds ago      Running             kube-scheduler              0                   232baf5e6e818
	b98cd396fdb12       bc2bb319a7038       58 seconds ago      Running             kube-controller-manager     0                   0bbbf0f687140
	ee6e3ad90fb4e       3d174f00aa39e       58 seconds ago      Running             kube-apiserver              0                   3a8ee57b53e2e
	
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2021-08-16 22:37:38 UTC, end at Mon 2021-08-16 22:44:21 UTC. --
	Aug 16 22:43:58 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:58.902145331Z" level=info msg="StartContainer for \"754475885127981608f71b96bb2e3ba5252c57ef4af4a244323d808ff263c78f\" returns successfully"
	Aug 16 22:43:58 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:58.931065548Z" level=info msg="Finish piping stderr of container \"754475885127981608f71b96bb2e3ba5252c57ef4af4a244323d808ff263c78f\""
	Aug 16 22:43:58 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:58.931249585Z" level=info msg="Finish piping stdout of container \"754475885127981608f71b96bb2e3ba5252c57ef4af4a244323d808ff263c78f\""
	Aug 16 22:43:58 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:58.933125823Z" level=info msg="TaskExit event &TaskExit{ContainerID:754475885127981608f71b96bb2e3ba5252c57ef4af4a244323d808ff263c78f,ID:754475885127981608f71b96bb2e3ba5252c57ef4af4a244323d808ff263c78f,Pid:6754,ExitStatus:1,ExitedAt:2021-08-16 22:43:58.932660606 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:59.002540546Z" level=info msg="shim disconnected" id=754475885127981608f71b96bb2e3ba5252c57ef4af4a244323d808ff263c78f
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:59.003217757Z" level=error msg="copy shim log" error="read /proc/self/fd/131: file already closed"
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:59.310844480Z" level=info msg="CreateContainer within sandbox \"94f02cd4190110c6a14cb72203c80ac478f0f42f6d9c58cdd09fafa3f66255e6\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:59.374209197Z" level=info msg="CreateContainer within sandbox \"94f02cd4190110c6a14cb72203c80ac478f0f42f6d9c58cdd09fafa3f66255e6\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363\""
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:59.376769567Z" level=info msg="StartContainer for \"cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363\""
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:59.778798857Z" level=info msg="StartContainer for \"cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363\" returns successfully"
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:59.804692191Z" level=info msg="Finish piping stderr of container \"cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363\""
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:59.804929346Z" level=info msg="Finish piping stdout of container \"cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363\""
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:59.806705559Z" level=info msg="TaskExit event &TaskExit{ContainerID:cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363,ID:cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363,Pid:6823,ExitStatus:1,ExitedAt:2021-08-16 22:43:59.806074276 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:59.872972095Z" level=info msg="shim disconnected" id=cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:59.873192054Z" level=error msg="copy shim log" error="read /proc/self/fd/131: file already closed"
	Aug 16 22:44:00 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:44:00.326471350Z" level=info msg="RemoveContainer for \"754475885127981608f71b96bb2e3ba5252c57ef4af4a244323d808ff263c78f\""
	Aug 16 22:44:00 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:44:00.351828125Z" level=info msg="RemoveContainer for \"754475885127981608f71b96bb2e3ba5252c57ef4af4a244323d808ff263c78f\" returns successfully"
	Aug 16 22:44:04 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:44:04.053397937Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:44:04 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:44:04.058967013Z" level=info msg="trying next host" error="failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" host=fake.domain
	Aug 16 22:44:04 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:44:04.061279204Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Aug 16 22:44:17 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:44:17.257567766Z" level=info msg="Finish piping stdout of container \"dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79\""
	Aug 16 22:44:17 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:44:17.257765243Z" level=info msg="Finish piping stderr of container \"dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79\""
	Aug 16 22:44:17 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:44:17.261754719Z" level=info msg="TaskExit event &TaskExit{ContainerID:dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79,ID:dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79,Pid:6676,ExitStatus:255,ExitedAt:2021-08-16 22:44:17.260803906 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:44:17 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:44:17.320806830Z" level=info msg="shim disconnected" id=dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79
	Aug 16 22:44:17 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:44:17.320947591Z" level=error msg="copy shim log" error="read /proc/self/fd/118: file already closed"
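	
	The PullImage failure above is the behavior this scenario exercises: the metrics-server manifest deliberately references fake.domain, an unresolvable registry host, so containerd's resolver fails at the registry HEAD request. A hedged Go sketch that reproduces such a pull with the containerd client (the socket path and the "k8s.io" namespace are the conventional CRI defaults, assumed here):
	
```go
// Attempt the same pull containerd logs above; fake.domain does not
// resolve, so Pull fails with "failed to resolve reference ...
// no such host" at the HEAD request.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		fmt.Println("connect:", err)
		return
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	_, err = client.Pull(ctx, "fake.domain/k8s.gcr.io/echoserver:1.4", containerd.WithPullUnpack)
	if err != nil {
		// Expected failure, matching the PullImage error in the log.
		fmt.Println("pull failed:", err)
	}
}
```
	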
	
	* 
	* ==> coredns [def14af399b574c5412b6c393d65b7d0b6ae9f71d7709f25fe7a7956acf00e0c] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +5.482425] systemd-fstab-generator[1162]: Ignoring "noauto" for root device
	[  +0.037511] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.083932] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1735 comm=systemd-network
	[  +0.885849] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.863051] vboxguest: loading out-of-tree module taints kernel.
	[  +0.006670] vboxguest: PCI device not found, probably running on physical hardware.
	[Aug16 22:38] systemd-fstab-generator[2068]: Ignoring "noauto" for root device
	[  +0.425362] systemd-fstab-generator[2101]: Ignoring "noauto" for root device
	[  +0.138611] systemd-fstab-generator[2114]: Ignoring "noauto" for root device
	[  +0.203668] systemd-fstab-generator[2144]: Ignoring "noauto" for root device
	[  +8.552234] systemd-fstab-generator[2339]: Ignoring "noauto" for root device
	[ +31.092805] kauditd_printk_skb: 38 callbacks suppressed
	[ +12.980841] kauditd_printk_skb: 83 callbacks suppressed
	[Aug16 22:39] kauditd_printk_skb: 98 callbacks suppressed
	[ +10.668181] kauditd_printk_skb: 5 callbacks suppressed
	[ +20.315098] NFSD: Unable to end grace period: -110
	[Aug16 22:43] systemd-fstab-generator[5208]: Ignoring "noauto" for root device
	[ +16.424678] systemd-fstab-generator[5623]: Ignoring "noauto" for root device
	[ +14.790691] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.059674] kauditd_printk_skb: 68 callbacks suppressed
	[  +7.608074] kauditd_printk_skb: 14 callbacks suppressed
	[Aug16 22:44] systemd-fstab-generator[6871]: Ignoring "noauto" for root device
	[  +0.732611] systemd-fstab-generator[6927]: Ignoring "noauto" for root device
	[  +1.062790] systemd-fstab-generator[6982]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8] <==
	* 2021-08-16 22:43:23.891985 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-16 22:43:23.905826 I | etcdserver: e5596a975f8061c0 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/16 22:43:23 INFO: e5596a975f8061c0 switched to configuration voters=(16526357505987600832)
	2021-08-16 22:43:23.910102 I | etcdserver/membership: added member e5596a975f8061c0 [https://192.168.50.186:2380] to cluster e001ea9e448e2c
	2021-08-16 22:43:23.918571 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-16 22:43:23.918814 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-16 22:43:23.918905 I | embed: listening for peers on 192.168.50.186:2380
	raft2021/08/16 22:43:24 INFO: e5596a975f8061c0 is starting a new election at term 1
	raft2021/08/16 22:43:24 INFO: e5596a975f8061c0 became candidate at term 2
	raft2021/08/16 22:43:24 INFO: e5596a975f8061c0 received MsgVoteResp from e5596a975f8061c0 at term 2
	raft2021/08/16 22:43:24 INFO: e5596a975f8061c0 became leader at term 2
	raft2021/08/16 22:43:24 INFO: raft.node: e5596a975f8061c0 elected leader e5596a975f8061c0 at term 2
	2021-08-16 22:43:24.261964 I | etcdserver: published {Name:default-k8s-different-port-20210816223418-6986 ClientURLs:[https://192.168.50.186:2379]} to cluster e001ea9e448e2c
	2021-08-16 22:43:24.263575 I | embed: ready to serve client requests
	2021-08-16 22:43:24.272425 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-16 22:43:24.273101 I | embed: ready to serve client requests
	2021-08-16 22:43:24.282779 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-16 22:43:24.285054 I | embed: serving client requests on 192.168.50.186:2379
	2021-08-16 22:43:24.285291 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-16 22:43:24.285423 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-16 22:43:40.337207 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:43:41.515116 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:43:48.547793 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/coredns\" " with result "range_response_count:1 size:612" took too long (144.579592ms) to execute
	2021-08-16 22:43:51.515451 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:44:01.515842 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
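Aside from one slow read (144ms on the coredns ConfigMap), etcd reports healthy through 22:44:01; the /health lines are served from the metrics listener the log shows on 127.0.0.1:2381. The same probe can be run by hand from inside the VM (a sketch; a healthy etcd 3.4 instance answers with {"health":"true"}):

    $ out/minikube-linux-amd64 ssh -p default-k8s-different-port-20210816223418-6986 "curl -s http://127.0.0.1:2381/health"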
	* 
	* ==> kernel <==
	*  22:44:32 up 7 min,  0 users,  load average: 1.69, 0.96, 0.43
	Linux default-k8s-different-port-20210816223418-6986 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [ee6e3ad90fb4e19819658b68382724e6a5cf8fe284be81a4b38efc7636b1dde1] <==
	* E0816 22:44:17.236798       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0816 22:44:17.237004       1 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
	E0816 22:44:17.238652       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0816 22:44:17.240273       1 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
	I0816 22:44:17.241583       1 trace.go:205] Trace[492557165]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.50.186,accept:application/json, */*,protocol:HTTP/2.0 (16-Aug-2021 22:44:07.245) (total time: 9996ms):
	Trace[492557165]: [9.996047662s] [9.996047662s] END
	I0816 22:44:25.478660       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:25.498656       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:25.581705       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:29.718642       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:29.719307       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:29.966933       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:30.005138       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:30.005147       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:30.100777       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:30.118916       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:30.290735       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:30.291510       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:30.375176       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:30.375188       1 clientconn.go:897] blockingPicker: the picked transport is not ready, loop back to repick
	I0816 22:44:32.151259       1 trace.go:205] Trace[1453126551]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:44:21.759) (total time: 10391ms):
	Trace[1453126551]: [10.391866664s] [10.391866664s] END
	E0816 22:44:32.151545       1 status.go:71] apiserver received an error that is not an metav1.Status: &status.statusError{state:impl.MessageState{NoUnkeyedLiterals:pragma.NoUnkeyedLiterals{}, DoNotCompare:pragma.DoNotCompare{}, DoNotCopy:pragma.DoNotCopy{}, atomicMessageInfo:(*impl.MessageInfo)(nil)}, sizeCache:0, unknownFields:[]uint8(nil), Code:14, Message:"transport is closing", Details:[]*anypb.Any(nil)}: rpc error: code = Unavailable desc = transport is closing
	I0816 22:44:32.152847       1 trace.go:205] Trace[507395781]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (16-Aug-2021 22:44:21.759) (total time: 10393ms):
	Trace[507395781]: [10.393635845s] [10.393635845s] END
	
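The ~10s traces and the final "transport is closing" status indicate the apiserver kept serving but lost its etcd connection, which matches the "[-]etcd failed: reason withheld" healthz body captured later in this report. That per-check breakdown comes straight from the healthz endpoint, here on this profile's non-default port 8444 (a sketch; -k accepts the cluster's self-signed certificate):

    $ curl -k "https://192.168.50.186:8444/healthz?verbose"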
	* 
	* ==> kube-controller-manager [b98cd396fdb126a029db11a0dc2fba55c059c5e1325b67b4a634d563af29288b] <==
	* E0816 22:43:50.226026       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:50.226439       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:43:50.230076       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6fcdf4f6d to 1"
	E0816 22:43:50.262585       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:50.263923       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:43:50.264865       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:43:50.283638       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:43:50.293231       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:50.312112       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:43:50.322851       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:50.323848       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:43:50.333753       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:43:50.334111       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:50.412580       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:43:50.412653       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:50.414212       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:43:50.417056       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:43:50.452424       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:43:50.452951       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:50.453019       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:43:50.453035       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:43:50.521534       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-7gstk"
	I0816 22:43:50.635016       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-nctsf"
	E0816 22:44:15.468466       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:44:15.903027       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5] <==
	* I0816 22:43:46.550878       1 node.go:172] Successfully retrieved node IP: 192.168.50.186
	I0816 22:43:46.550950       1 server_others.go:140] Detected node IP 192.168.50.186
	W0816 22:43:46.551113       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0816 22:43:46.603198       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0816 22:43:46.604257       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0816 22:43:46.604284       1 server_others.go:212] Using iptables Proxier.
	I0816 22:43:46.604671       1 server.go:643] Version: v1.21.3
	I0816 22:43:46.606945       1 config.go:224] Starting endpoint slice config controller
	I0816 22:43:46.608974       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0816 22:43:46.608754       1 config.go:315] Starting service config controller
	I0816 22:43:46.610177       1 shared_informer.go:240] Waiting for caches to sync for service config
	W0816 22:43:46.628591       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 22:43:46.631027       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:43:46.710924       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 22:43:46.711125       1 shared_informer.go:247] Caches are synced for service config 
	
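The Unknown proxy mode "" warning just means no mode was set in the kube-proxy configuration, so it fell back to the iptables proxier after ruling out IPv6 support; everything after that is a normal cache sync. The effective mode can be checked in-cluster (a sketch, assuming the standard kube-proxy ConfigMap that kubeadm-based clusters ship):

    $ kubectl -n kube-system get configmap kube-proxy -o yaml | grep "mode:"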
	* 
	* ==> kube-scheduler [91c23817fb7abda2b7c16241219d51e4ed678c5cb58792aa6ddbebafd8c8e1d7] <==
	* I0816 22:43:28.847982       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0816 22:43:28.857263       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:43:28.871199       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:43:28.875806       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:43:28.876635       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:43:28.879652       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:43:28.880282       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:43:28.880793       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:43:28.880851       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:43:28.880928       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:43:28.880986       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:43:28.881036       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:43:28.881082       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:43:28.881093       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:43:28.881264       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:43:29.699413       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:43:29.802196       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:43:29.809598       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:43:29.912489       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:43:30.006080       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:43:30.009886       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:43:30.027658       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:43:30.177599       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:43:30.333038       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0816 22:43:32.148167       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
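The wall of "forbidden" errors is the usual startup race: the scheduler's informers begin listing resources before the apiserver's rbac/bootstrap-roles post-start hook (listed as ok in the healthz output later in this report) has reconciled the system:kube-scheduler bindings, and the closing "Caches are synced" line shows it recovered on its own. Once bootstrap completes, the grants can be spot-checked with impersonation (a sketch):

    $ kubectl auth can-i list statefulsets.apps --as=system:kube-scheduler    # should print: yes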
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:37:38 UTC, end at Mon 2021-08-16 22:44:32 UTC. --
	Aug 16 22:43:50 default-k8s-different-port-20210816223418-6986 kubelet[5632]: I0816 22:43:50.688008    5632 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/806d3966-d956-400b-b825-eb1393026138-tmp-volume\") pod \"kubernetes-dashboard-6fcdf4f6d-7gstk\" (UID: \"806d3966-d956-400b-b825-eb1393026138\") "
	Aug 16 22:43:50 default-k8s-different-port-20210816223418-6986 kubelet[5632]: I0816 22:43:50.744073    5632 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:43:50 default-k8s-different-port-20210816223418-6986 kubelet[5632]: I0816 22:43:50.790433    5632 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-984n5\" (UniqueName: \"kubernetes.io/projected/3023a30f-e167-4d2d-9cb7-5f01b3a89700-kube-api-access-984n5\") pod \"dashboard-metrics-scraper-8685c45546-nctsf\" (UID: \"3023a30f-e167-4d2d-9cb7-5f01b3a89700\") "
	Aug 16 22:43:50 default-k8s-different-port-20210816223418-6986 kubelet[5632]: I0816 22:43:50.795038    5632 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3023a30f-e167-4d2d-9cb7-5f01b3a89700-tmp-volume\") pod \"dashboard-metrics-scraper-8685c45546-nctsf\" (UID: \"3023a30f-e167-4d2d-9cb7-5f01b3a89700\") "
	Aug 16 22:43:51 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:43:51.823508    5632 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:43:51 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:43:51.823559    5632 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:43:51 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:43:51.823847    5632 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5w76r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-pbxnr_kube-system(fa2d27a5-b243-4a8f-9450-b834d1ce5bb0): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:43:51 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:43:51.823907    5632 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-pbxnr" podUID=fa2d27a5-b243-4a8f-9450-b834d1ce5bb0
	Aug 16 22:43:52 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:43:52.193302    5632 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-pbxnr" podUID=fa2d27a5-b243-4a8f-9450-b834d1ce5bb0
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 kubelet[5632]: I0816 22:43:59.293048    5632 scope.go:111] "RemoveContainer" containerID="754475885127981608f71b96bb2e3ba5252c57ef4af4a244323d808ff263c78f"
	Aug 16 22:44:00 default-k8s-different-port-20210816223418-6986 kubelet[5632]: I0816 22:44:00.308642    5632 scope.go:111] "RemoveContainer" containerID="754475885127981608f71b96bb2e3ba5252c57ef4af4a244323d808ff263c78f"
	Aug 16 22:44:00 default-k8s-different-port-20210816223418-6986 kubelet[5632]: I0816 22:44:00.310201    5632 scope.go:111] "RemoveContainer" containerID="cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363"
	Aug 16 22:44:00 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:44:00.311651    5632 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-nctsf_kubernetes-dashboard(3023a30f-e167-4d2d-9cb7-5f01b3a89700)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-nctsf" podUID=3023a30f-e167-4d2d-9cb7-5f01b3a89700
	Aug 16 22:44:01 default-k8s-different-port-20210816223418-6986 kubelet[5632]: I0816 22:44:01.316243    5632 scope.go:111] "RemoveContainer" containerID="cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363"
	Aug 16 22:44:01 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:44:01.317822    5632 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-nctsf_kubernetes-dashboard(3023a30f-e167-4d2d-9cb7-5f01b3a89700)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-nctsf" podUID=3023a30f-e167-4d2d-9cb7-5f01b3a89700
	Aug 16 22:44:02 default-k8s-different-port-20210816223418-6986 kubelet[5632]: I0816 22:44:02.319878    5632 scope.go:111] "RemoveContainer" containerID="cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363"
	Aug 16 22:44:02 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:44:02.321612    5632 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-nctsf_kubernetes-dashboard(3023a30f-e167-4d2d-9cb7-5f01b3a89700)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-nctsf" podUID=3023a30f-e167-4d2d-9cb7-5f01b3a89700
	Aug 16 22:44:04 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:44:04.063102    5632 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:44:04 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:44:04.063318    5632 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:44:04 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:44:04.067426    5632 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5w76r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-pbxnr_kube-system(fa2d27a5-b243-4a8f-9450-b834d1ce5bb0): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:44:04 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:44:04.067833    5632 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-pbxnr" podUID=fa2d27a5-b243-4a8f-9450-b834d1ce5bb0
	Aug 16 22:44:04 default-k8s-different-port-20210816223418-6986 kubelet[5632]: I0816 22:44:04.389451    5632 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 16 22:44:04 default-k8s-different-port-20210816223418-6986 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:44:04 default-k8s-different-port-20210816223418-6986 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:44:04 default-k8s-different-port-20210816223418-6986 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
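Two separate things are interleaved here: the ErrImagePull/ImagePullBackOff noise is expected, since the Audit table below shows this suite enabling metrics-server with --registries=MetricsServer=fake.domain, an intentionally unresolvable registry; the systemd "Stopped kubelet" lines at 22:44:04 are the pause operation under test actually taking effect. The same journal can be tailed directly (a sketch):

    $ out/minikube-linux-amd64 ssh -p default-k8s-different-port-20210816223418-6986 "sudo journalctl -u kubelet --no-pager -n 50"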
	* 
	* ==> kubernetes-dashboard [091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f] <==
	* 2021/08/16 22:43:52 Starting overwatch
	2021/08/16 22:43:52 Using namespace: kubernetes-dashboard
	2021/08/16 22:43:52 Using in-cluster config to connect to apiserver
	2021/08/16 22:43:53 Using secret token for csrf signing
	2021/08/16 22:43:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/16 22:43:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/16 22:43:53 Successful initial request to the apiserver, version: v1.21.3
	2021/08/16 22:43:53 Generating JWE encryption key
	2021/08/16 22:43:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/16 22:43:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/16 22:43:53 Initializing JWE encryption key from synchronized object
	2021/08/16 22:43:53 Creating in-cluster Sidecar client
	2021/08/16 22:43:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:43:53 Serving insecurely on HTTP port: 9090
	
	* 
	* ==> storage-provisioner [dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 78 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc000110110, 0x0)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc000110100)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc000100300, 0x0, 0x0, 0x404020302030300)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc00040ea00, 0x18e5530, 0xc000110240, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00038ce20)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00038ce20, 0x18b3d60, 0xc00039a420, 0x1, 0xc00014a5a0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00038ce20, 0x3b9aca00, 0x0, 0x1, 0xc00014a5a0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc00038ce20, 0x3b9aca00, 0xc00014a5a0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
	
	
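This is the tail of a goroutine dump from the storage-provisioner crash that produced the ExitStatus:255 TaskExit in the containerd section: goroutine 78 is parked in a workqueue Get while the process dies elsewhere, so the actual panic sits above the captured window. The pre-crash output can usually be recovered from the previous container instance (a sketch, assuming minikube's standard pod name):

    $ kubectl -n kube-system logs storage-provisioner --previous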

-- /stdout --
** stderr ** 
	E0816 22:44:32.158780   20500 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Error from server: rpc error: code = Unavailable desc = transport is closing
	 output: "\n** stderr ** \nError from server: rpc error: code = Unavailable desc = transport is closing\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210816223418-6986 -n default-k8s-different-port-20210816223418-6986

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210816223418-6986 -n default-k8s-different-port-20210816223418-6986: exit status 2 (16.206660683s)

-- stdout --
	Running

-- /stdout --
** stderr ** 
	E0816 22:44:48.990107   20871 status.go:422] Error apiserver status: https://192.168.50.186:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[-]etcd failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	

** /stderr **
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/default-k8s-different-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/default-k8s-different-port/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-different-port-20210816223418-6986 logs -n 25
E0816 22:45:04.744589    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816223156-6986/client.crt: no such file or directory
E0816 22:45:04.749913    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816223156-6986/client.crt: no such file or directory
E0816 22:45:04.760196    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816223156-6986/client.crt: no such file or directory
E0816 22:45:04.780549    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816223156-6986/client.crt: no such file or directory
E0816 22:45:04.820814    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816223156-6986/client.crt: no such file or directory
E0816 22:45:04.901143    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816223156-6986/client.crt: no such file or directory
E0816 22:45:05.061568    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816223156-6986/client.crt: no such file or directory
E0816 22:45:05.382038    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816223156-6986/client.crt: no such file or directory
E0816 22:45:06.022713    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816223156-6986/client.crt: no such file or directory
E0816 22:45:07.303544    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816223156-6986/client.crt: no such file or directory
E0816 22:45:09.864133    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816223156-6986/client.crt: no such file or directory
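The repeated cert_rotation errors come from the test runner's client-go certificate watcher retrying (note the roughly exponential backoff in the timestamps) against a client.crt for the no-preload profile that no longer exists on disk; they are background noise for this failure rather than part of it. Deleting the stale profile silences them (a sketch):

    $ out/minikube-linux-amd64 delete -p no-preload-20210816223156-6986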

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Pause
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p default-k8s-different-port-20210816223418-6986 logs -n 25: exit status 110 (1m1.106184543s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                       Args                        |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable metrics-server -p                          | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:44 UTC | Mon, 16 Aug 2021 22:34:45 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816223156-6986                 | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:31:56 UTC | Mon, 16 Aug 2021 22:35:04 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:33:33 UTC | Mon, 16 Aug 2021 22:35:08 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:16 UTC | Mon, 16 Aug 2021 22:35:17 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:19 UTC | Mon, 16 Aug 2021 22:35:20 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:18 UTC | Mon, 16 Aug 2021 22:35:42 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                          | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:51 UTC | Mon, 16 Aug 2021 22:35:52 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain            |                                                |         |         |                               |                               |
	| stop    | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:34:45 UTC | Mon, 16 Aug 2021 22:36:18 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:18 UTC | Mon, 16 Aug 2021 22:36:18 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| stop    | -p                                                | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:17 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| stop    | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:20 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| stop    | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:52 UTC | Mon, 16 Aug 2021 22:37:25 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                            |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                               | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:37:25 UTC | Mon, 16 Aug 2021 22:37:25 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816223156-6986                 | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:42:31 UTC |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                       |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                 |                                                |         |         |                               |                               |
	| ssh     | -p                                                | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:42:42 UTC | Mon, 16 Aug 2021 22:42:42 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| start   | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:18 UTC | Mon, 16 Aug 2021 22:43:37 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                 |                                                |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                     |                                                |         |         |                               |                               |
	|         | --disable-driver-mounts                           |                                                |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                      |                                                |         |         |                               |                               |
	| start   | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:43:44 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                   |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                         |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                     |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                    |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| ssh     | -p                                                | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:43:47 UTC | Mon, 16 Aug 2021 22:43:47 UTC |
	|         | old-k8s-version-20210816223154-6986               |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| start   | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:37:25 UTC | Mon, 16 Aug 2021 22:43:52 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true       |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2               |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                   |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                      |                                                |         |         |                               |                               |
	| ssh     | -p                                                | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:43:55 UTC | Mon, 16 Aug 2021 22:43:55 UTC |
	|         | embed-certs-20210816223333-6986                   |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| ssh     | -p                                                | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:44:03 UTC | Mon, 16 Aug 2021 22:44:03 UTC |
	|         | default-k8s-different-port-20210816223418-6986    |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                        |                                                |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:44:30 UTC | Mon, 16 Aug 2021 22:44:30 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	| delete  | -p                                                | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:44:31 UTC | Mon, 16 Aug 2021 22:44:31 UTC |
	|         | no-preload-20210816223156-6986                    |                                                |         |         |                               |                               |
	|---------|---------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:44:31
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:44:31.336463   20709 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:44:31.336533   20709 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:44:31.336537   20709 out.go:311] Setting ErrFile to fd 2...
	I0816 22:44:31.336542   20709 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:44:31.336660   20709 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:44:31.336912   20709 out.go:305] Setting JSON to false
	I0816 22:44:31.372871   20709 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":5233,"bootTime":1629148638,"procs":183,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:44:31.372979   20709 start.go:121] virtualization: kvm guest
	I0816 22:44:31.375339   20709 out.go:177] * [newest-cni-20210816224431-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:44:31.376976   20709 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:44:31.375475   20709 notify.go:169] Checking for updates...
	I0816 22:44:31.378360   20709 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:44:31.379751   20709 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:44:31.381087   20709 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:44:31.381541   20709 config.go:177] Loaded profile config "default-k8s-different-port-20210816223418-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:44:31.381666   20709 config.go:177] Loaded profile config "embed-certs-20210816223333-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:44:31.381762   20709 config.go:177] Loaded profile config "old-k8s-version-20210816223154-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	I0816 22:44:31.381800   20709 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:44:31.415103   20709 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 22:44:31.415129   20709 start.go:278] selected driver: kvm2
	I0816 22:44:31.415136   20709 start.go:751] validating driver "kvm2" against <nil>
	I0816 22:44:31.415156   20709 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 22:44:31.417123   20709 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:44:31.417269   20709 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 22:44:31.428378   20709 install.go:137] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2 version is 1.22.0
	I0816 22:44:31.428425   20709 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	W0816 22:44:31.428448   20709 out.go:242] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0816 22:44:31.428583   20709 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 22:44:31.428608   20709 cni.go:93] Creating CNI manager for ""
	I0816 22:44:31.428616   20709 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:44:31.428624   20709 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 22:44:31.428632   20709 start_flags.go:277] config:
	{Name:newest-cni-20210816224431-6986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816224431-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:44:31.428727   20709 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:44:31.430575   20709 out.go:177] * Starting control plane node newest-cni-20210816224431-6986 in cluster newest-cni-20210816224431-6986
	I0816 22:44:31.430597   20709 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 22:44:31.430639   20709 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0816 22:44:31.430657   20709 cache.go:56] Caching tarball of preloaded images
	I0816 22:44:31.430757   20709 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 22:44:31.430778   20709 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on containerd
	I0816 22:44:31.430895   20709 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/config.json ...
	I0816 22:44:31.430918   20709 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/config.json: {Name:mkc9663018589074668a46d91251fc73622d0917 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:44:31.431076   20709 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:44:31.431108   20709 start.go:313] acquiring machines lock for newest-cni-20210816224431-6986: {Name:mk808edd60d1305a42bb85791729eff4573dbb15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 22:44:31.431156   20709 start.go:317] acquired machines lock for "newest-cni-20210816224431-6986" in 32.129µs
	I0816 22:44:31.431179   20709 start.go:89] Provisioning new machine with config: &{Name:newest-cni-20210816224431-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816224431-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0} &{Name: IP: Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0816 22:44:31.431268   20709 start.go:126] createHost starting for "" (driver="kvm2")
	I0816 22:44:31.433330   20709 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0816 22:44:31.433460   20709 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:44:31.433512   20709 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:44:31.443654   20709 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46495
	I0816 22:44:31.444063   20709 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:44:31.444565   20709 main.go:130] libmachine: Using API Version  1
	I0816 22:44:31.444586   20709 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:44:31.444925   20709 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:44:31.445103   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetMachineName
	I0816 22:44:31.445239   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:44:31.445356   20709 start.go:160] libmachine.API.Create for "newest-cni-20210816224431-6986" (driver="kvm2")
	I0816 22:44:31.445394   20709 client.go:168] LocalClient.Create starting
	I0816 22:44:31.445428   20709 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem
	I0816 22:44:31.445462   20709 main.go:130] libmachine: Decoding PEM data...
	I0816 22:44:31.445480   20709 main.go:130] libmachine: Parsing certificate...
	I0816 22:44:31.445628   20709 main.go:130] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem
	I0816 22:44:31.445657   20709 main.go:130] libmachine: Decoding PEM data...
	I0816 22:44:31.445682   20709 main.go:130] libmachine: Parsing certificate...
	I0816 22:44:31.445747   20709 main.go:130] libmachine: Running pre-create checks...
	I0816 22:44:31.445765   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .PreCreateCheck
	I0816 22:44:31.446091   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetConfigRaw
	I0816 22:44:31.446492   20709 main.go:130] libmachine: Creating machine...
	I0816 22:44:31.446507   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Create
	I0816 22:44:31.446664   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Creating KVM machine...
	I0816 22:44:31.449397   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found existing default KVM network
	I0816 22:44:31.450933   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.450787   20733 network.go:240] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b4:06:d9}}
	I0816 22:44:31.452400   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.452341   20733 network.go:240] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:f8:b7:da}}
	I0816 22:44:31.453464   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.453369   20733 network.go:240] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:ee:53:1e}}
	I0816 22:44:31.454534   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.454466   20733 network.go:240] skipping subnet 192.168.72.0/24 that is taken: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 Interface:{IfaceName:virbr4 IfaceIPv4:192.168.72.1 IfaceMTU:1500 IfaceMAC:52:54:00:45:d3:67}}
	I0816 22:44:31.456524   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.456452   20733 network.go:240] skipping subnet 192.168.83.0/24 that is taken: &{IP:192.168.83.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.83.0/24 Gateway:192.168.83.1 ClientMin:192.168.83.2 ClientMax:192.168.83.254 Broadcast:192.168.83.255 Interface:{IfaceName:virbr5 IfaceIPv4:192.168.83.1 IfaceMTU:1500 IfaceMAC:52:54:00:ea:76:4e}}
	I0816 22:44:31.457759   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.457675   20733 network.go:240] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 Interface:{IfaceName:virbr6 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:52:54:00:6c:86:bd}}
	I0816 22:44:31.458795   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.458728   20733 network.go:240] skipping subnet 192.168.105.0/24 that is taken: &{IP:192.168.105.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.105.0/24 Gateway:192.168.105.1 ClientMin:192.168.105.2 ClientMax:192.168.105.254 Broadcast:192.168.105.255 Interface:{IfaceName:virbr7 IfaceIPv4:192.168.105.1 IfaceMTU:1500 IfaceMAC:52:54:00:ea:b2:03}}
	I0816 22:44:31.460187   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.460086   20733 network.go:288] reserving subnet 192.168.116.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.116.0:0xc0000be0b8] misses:0}
	I0816 22:44:31.460215   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.460127   20733 network.go:235] using free private subnet 192.168.116.0/24: &{IP:192.168.116.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.116.0/24 Gateway:192.168.116.1 ClientMin:192.168.116.2 ClientMax:192.168.116.254 Broadcast:192.168.116.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0816 22:44:31.497376   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | trying to create private KVM network mk-newest-cni-20210816224431-6986 192.168.116.0/24...
	I0816 22:44:31.783525   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | private KVM network mk-newest-cni-20210816224431-6986 192.168.116.0/24 created
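
The network.go lines above show minikube's free-subnet scan: it walks candidate 192.168.x.0/24 ranges in steps of 11 (39, 50, 61, ..., 116), skips any subnet whose gateway address is already in use on the host, and reserves the first free one for the new libvirt network. A minimal sketch of that scan under those assumptions; the isTaken check here is a hypothetical stand-in for minikube's richer interface inspection and its 1m0s reservation lease:

	package main

	import (
		"fmt"
		"net"
	)

	// isTaken reports whether a candidate subnet's gateway IP is already
	// assigned to a local interface (hypothetical stand-in for the real
	// check, which also consults the existing libvirt networks).
	func isTaken(gateway net.IP) bool {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return false
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.Equal(gateway) {
				return true
			}
		}
		return false
	}

	func main() {
		// Candidate third octets matching the progression in the log.
		for octet := 39; octet <= 254; octet += 11 {
			gateway := net.IPv4(192, 168, byte(octet), 1)
			if isTaken(gateway) {
				fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
				continue
			}
			fmt.Printf("using free private subnet 192.168.%d.0/24\n", octet)
			break
		}
	}
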
	I0816 22:44:31.783566   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Setting up store path in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986 ...
	I0816 22:44:31.783588   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.783470   20733 common.go:101] Making disk image using store path: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:44:31.783620   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Building disk image from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0816 22:44:31.783744   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Downloading /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso...
	I0816 22:44:31.986209   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:31.986090   20733 common.go:108] Creating ssh key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa...
	I0816 22:44:32.210064   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:32.209964   20733 common.go:114] Creating raw disk image: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/newest-cni-20210816224431-6986.rawdisk...
	I0816 22:44:32.210106   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Writing magic tar header
	I0816 22:44:32.210184   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Writing SSH key tar header
	I0816 22:44:32.210290   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:32.210235   20733 common.go:128] Fixing permissions on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986 ...
	I0816 22:44:32.210373   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986
	I0816 22:44:32.210394   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines
	I0816 22:44:32.210410   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986 (perms=drwx------)
	I0816 22:44:32.210437   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:44:32.210461   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a
	I0816 22:44:32.210482   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines (perms=drwxr-xr-x)
	I0816 22:44:32.210497   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0816 22:44:32.210520   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube (perms=drwxr-xr-x)
	I0816 22:44:32.210544   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Setting executable bit set on /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a (perms=drwxr-xr-x)
	I0816 22:44:32.210559   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxr-xr-x)
	I0816 22:44:32.210573   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Checking permissions on dir: /home/jenkins
	I0816 22:44:32.210588   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Checking permissions on dir: /home
	I0816 22:44:32.210601   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Skipping /home - not owner
	I0816 22:44:32.210622   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0816 22:44:32.210643   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Creating domain...
	I0816 22:44:32.236885   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:cb:20:33 in network default
	I0816 22:44:32.237605   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Ensuring networks are active...
	I0816 22:44:32.237633   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:32.239810   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Ensuring network default is active
	I0816 22:44:32.240283   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Ensuring network mk-newest-cni-20210816224431-6986 is active
	I0816 22:44:32.240922   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Getting domain xml...
	I0816 22:44:32.242965   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Creating domain...
	I0816 22:44:32.738898   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) Waiting to get IP...
	I0816 22:44:32.739904   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:32.740448   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:32.740502   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:32.740433   20733 retry.go:31] will retry after 263.082536ms: waiting for machine to come up
	I0816 22:44:33.004929   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:33.005411   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:33.005575   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:33.005505   20733 retry.go:31] will retry after 381.329545ms: waiting for machine to come up
	I0816 22:44:33.387930   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:33.388329   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:33.388355   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:33.388297   20733 retry.go:31] will retry after 422.765636ms: waiting for machine to come up
	I0816 22:44:33.812967   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:33.813440   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:33.813476   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:33.813379   20733 retry.go:31] will retry after 473.074753ms: waiting for machine to come up
	I0816 22:44:34.287851   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:34.288312   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:34.288339   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:34.288266   20733 retry.go:31] will retry after 587.352751ms: waiting for machine to come up
	I0816 22:44:34.876901   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:34.877366   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:34.877411   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:34.877323   20733 retry.go:31] will retry after 834.206799ms: waiting for machine to come up
	I0816 22:44:35.713609   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:35.714113   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:35.714144   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:35.714065   20733 retry.go:31] will retry after 746.553905ms: waiting for machine to come up
	I0816 22:44:36.462453   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:36.463031   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:36.463062   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:36.462902   20733 retry.go:31] will retry after 987.362415ms: waiting for machine to come up
	I0816 22:44:37.451848   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:37.452318   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:37.452342   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:37.452285   20733 retry.go:31] will retry after 1.189835008s: waiting for machine to come up
	I0816 22:44:38.643196   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:38.643649   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:38.643677   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:38.643613   20733 retry.go:31] will retry after 1.677229867s: waiting for machine to come up
	I0816 22:44:40.323071   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:40.323607   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:40.323641   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:40.323541   20733 retry.go:31] will retry after 2.346016261s: waiting for machine to come up
	I0816 22:44:42.671108   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:42.671597   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:42.671621   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:42.671578   20733 retry.go:31] will retry after 3.36678925s: waiting for machine to come up
	I0816 22:44:46.039560   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:44:46.040060   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | unable to find current IP address of domain newest-cni-20210816224431-6986 in network mk-newest-cni-20210816224431-6986
	I0816 22:44:46.040083   20709 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | I0816 22:44:46.040019   20733 retry.go:31] will retry after 3.11822781s: waiting for machine to come up
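
The retry.go:31 entries above form a jittered backoff loop: each poll for the new VM's IP fails ("unable to find current IP address"), logs a growing delay (263ms at first, a few seconds by the end), and sleeps before polling again. A minimal sketch of the same wait-for-IP pattern; getIP, the attempt cap, and the growth factor are illustrative assumptions, not minikube's actual constants:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// getIP is a hypothetical probe standing in for the libvirt DHCP
	// lease lookup; it fails until the guest has an address.
	func getIP() (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	func main() {
		delay := 250 * time.Millisecond
		for attempt := 1; attempt <= 13; attempt++ {
			if ip, err := getIP(); err == nil {
				fmt.Println("machine is up at", ip)
				return
			}
			// Jitter the delay, then grow it so later attempts poll less
			// aggressively, mirroring the intervals in the log above.
			wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay = delay * 3 / 2
		}
		fmt.Println("timed out waiting for machine to come up")
	}
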
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	cf018e1aa6e20       523cad1a4df73       50 seconds ago       Exited              dashboard-metrics-scraper   1                   94f02cd419011
	dfd15db9428f2       6e38f40d628db       56 seconds ago       Exited              storage-provisioner         0                   cd0f8b9910c05
	091f1b89bc6ac       9a07b5b4bfac0       57 seconds ago       Running             kubernetes-dashboard        0                   35bd425379895
	def14af399b57       296a6d5035e2d       About a minute ago   Running             coredns                     0                   e49dbc663b365
	59465014a8ff9       adb2816ea823a       About a minute ago   Running             kube-proxy                  0                   ba0dd77bce835
	57f34c516cefd       0369cf4303ffd       About a minute ago   Running             etcd                        0                   1edc418e603d1
	91c23817fb7ab       6be0dc1302e30       About a minute ago   Running             kube-scheduler              0                   232baf5e6e818
	b98cd396fdb12       bc2bb319a7038       About a minute ago   Running             kube-controller-manager     0                   0bbbf0f687140
	ee6e3ad90fb4e       3d174f00aa39e       About a minute ago   Running             kube-apiserver              0                   3a8ee57b53e2e
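
The status table above is CRI-level container state from the node; the test harness gathers the matching image list over ssh with "sudo crictl images -o json" (see the command table earlier). A sketch of consuming that JSON from Go; the struct fields assume the common crictl output shape ({"images":[{"id":...,"repoTags":[...]}]}) and should be checked against your crictl version:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// imageList mirrors the assumed shape of "crictl images -o json".
	type imageList struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		// Run on the node itself (the test reaches it via minikube ssh).
		out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			log.Fatal(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.ID, img.RepoTags)
		}
	}
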
	
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2021-08-16 22:37:38 UTC, end at Mon 2021-08-16 22:44:49 UTC. --
	Aug 16 22:43:58 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:58.902145331Z" level=info msg="StartContainer for \"754475885127981608f71b96bb2e3ba5252c57ef4af4a244323d808ff263c78f\" returns successfully"
	Aug 16 22:43:58 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:58.931065548Z" level=info msg="Finish piping stderr of container \"754475885127981608f71b96bb2e3ba5252c57ef4af4a244323d808ff263c78f\""
	Aug 16 22:43:58 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:58.931249585Z" level=info msg="Finish piping stdout of container \"754475885127981608f71b96bb2e3ba5252c57ef4af4a244323d808ff263c78f\""
	Aug 16 22:43:58 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:58.933125823Z" level=info msg="TaskExit event &TaskExit{ContainerID:754475885127981608f71b96bb2e3ba5252c57ef4af4a244323d808ff263c78f,ID:754475885127981608f71b96bb2e3ba5252c57ef4af4a244323d808ff263c78f,Pid:6754,ExitStatus:1,ExitedAt:2021-08-16 22:43:58.932660606 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:59.002540546Z" level=info msg="shim disconnected" id=754475885127981608f71b96bb2e3ba5252c57ef4af4a244323d808ff263c78f
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:59.003217757Z" level=error msg="copy shim log" error="read /proc/self/fd/131: file already closed"
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:59.310844480Z" level=info msg="CreateContainer within sandbox \"94f02cd4190110c6a14cb72203c80ac478f0f42f6d9c58cdd09fafa3f66255e6\" for container &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,}"
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:59.374209197Z" level=info msg="CreateContainer within sandbox \"94f02cd4190110c6a14cb72203c80ac478f0f42f6d9c58cdd09fafa3f66255e6\" for &ContainerMetadata{Name:dashboard-metrics-scraper,Attempt:1,} returns container id \"cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363\""
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:59.376769567Z" level=info msg="StartContainer for \"cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363\""
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:59.778798857Z" level=info msg="StartContainer for \"cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363\" returns successfully"
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:59.804692191Z" level=info msg="Finish piping stderr of container \"cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363\""
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:59.804929346Z" level=info msg="Finish piping stdout of container \"cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363\""
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:59.806705559Z" level=info msg="TaskExit event &TaskExit{ContainerID:cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363,ID:cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363,Pid:6823,ExitStatus:1,ExitedAt:2021-08-16 22:43:59.806074276 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:59.872972095Z" level=info msg="shim disconnected" id=cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:43:59.873192054Z" level=error msg="copy shim log" error="read /proc/self/fd/131: file already closed"
	Aug 16 22:44:00 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:44:00.326471350Z" level=info msg="RemoveContainer for \"754475885127981608f71b96bb2e3ba5252c57ef4af4a244323d808ff263c78f\""
	Aug 16 22:44:00 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:44:00.351828125Z" level=info msg="RemoveContainer for \"754475885127981608f71b96bb2e3ba5252c57ef4af4a244323d808ff263c78f\" returns successfully"
	Aug 16 22:44:04 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:44:04.053397937Z" level=info msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Aug 16 22:44:04 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:44:04.058967013Z" level=info msg="trying next host" error="failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" host=fake.domain
	Aug 16 22:44:04 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:44:04.061279204Z" level=error msg="PullImage \"fake.domain/k8s.gcr.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Aug 16 22:44:17 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:44:17.257567766Z" level=info msg="Finish piping stdout of container \"dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79\""
	Aug 16 22:44:17 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:44:17.257765243Z" level=info msg="Finish piping stderr of container \"dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79\""
	Aug 16 22:44:17 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:44:17.261754719Z" level=info msg="TaskExit event &TaskExit{ContainerID:dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79,ID:dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79,Pid:6676,ExitStatus:255,ExitedAt:2021-08-16 22:44:17.260803906 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:44:17 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:44:17.320806830Z" level=info msg="shim disconnected" id=dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79
	Aug 16 22:44:17 default-k8s-different-port-20210816223418-6986 containerd[2156]: time="2021-08-16T22:44:17.320947591Z" level=error msg="copy shim log" error="read /proc/self/fd/118: file already closed"
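
The PullImage failure a few lines up is the expected outcome of the earlier addons command, which pointed metrics-server at --registries=MetricsServer=fake.domain (see the command table): the host simply has no DNS record. The same resolver error reproduces directly, as a quick sketch:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// fake.domain does not resolve, so this fails the same way the
		// containerd pull does: "lookup fake.domain ...: no such host".
		if _, err := net.LookupHost("fake.domain"); err != nil {
			fmt.Println("resolver error:", err)
		}
	}
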
	
	* 
	* ==> coredns [def14af399b574c5412b6c393d65b7d0b6ae9f71d7709f25fe7a7956acf00e0c] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.0
	linux/amd64, go1.15.3, 054c9ae
	[INFO] Reloading
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] plugin/reload: Running configuration MD5 = 7ae91e86dd75dee9ae501cb58003198b
	[INFO] Reloading complete
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [  +5.482425] systemd-fstab-generator[1162]: Ignoring "noauto" for root device
	[  +0.037511] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +1.083932] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1735 comm=systemd-network
	[  +0.885849] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.863051] vboxguest: loading out-of-tree module taints kernel.
	[  +0.006670] vboxguest: PCI device not found, probably running on physical hardware.
	[Aug16 22:38] systemd-fstab-generator[2068]: Ignoring "noauto" for root device
	[  +0.425362] systemd-fstab-generator[2101]: Ignoring "noauto" for root device
	[  +0.138611] systemd-fstab-generator[2114]: Ignoring "noauto" for root device
	[  +0.203668] systemd-fstab-generator[2144]: Ignoring "noauto" for root device
	[  +8.552234] systemd-fstab-generator[2339]: Ignoring "noauto" for root device
	[ +31.092805] kauditd_printk_skb: 38 callbacks suppressed
	[ +12.980841] kauditd_printk_skb: 83 callbacks suppressed
	[Aug16 22:39] kauditd_printk_skb: 98 callbacks suppressed
	[ +10.668181] kauditd_printk_skb: 5 callbacks suppressed
	[ +20.315098] NFSD: Unable to end grace period: -110
	[Aug16 22:43] systemd-fstab-generator[5208]: Ignoring "noauto" for root device
	[ +16.424678] systemd-fstab-generator[5623]: Ignoring "noauto" for root device
	[ +14.790691] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.059674] kauditd_printk_skb: 68 callbacks suppressed
	[  +7.608074] kauditd_printk_skb: 14 callbacks suppressed
	[Aug16 22:44] systemd-fstab-generator[6871]: Ignoring "noauto" for root device
	[  +0.732611] systemd-fstab-generator[6927]: Ignoring "noauto" for root device
	[  +1.062790] systemd-fstab-generator[6982]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [57f34c516cefdbad7bb2b9fc925e4c38032ca0dcee3b7186494d5465f73371e8] <==
	* 2021-08-16 22:43:23.891985 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
	2021-08-16 22:43:23.905826 I | etcdserver: e5596a975f8061c0 as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2021/08/16 22:43:23 INFO: e5596a975f8061c0 switched to configuration voters=(16526357505987600832)
	2021-08-16 22:43:23.910102 I | etcdserver/membership: added member e5596a975f8061c0 [https://192.168.50.186:2380] to cluster e001ea9e448e2c
	2021-08-16 22:43:23.918571 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2021-08-16 22:43:23.918814 I | embed: listening for metrics on http://127.0.0.1:2381
	2021-08-16 22:43:23.918905 I | embed: listening for peers on 192.168.50.186:2380
	raft2021/08/16 22:43:24 INFO: e5596a975f8061c0 is starting a new election at term 1
	raft2021/08/16 22:43:24 INFO: e5596a975f8061c0 became candidate at term 2
	raft2021/08/16 22:43:24 INFO: e5596a975f8061c0 received MsgVoteResp from e5596a975f8061c0 at term 2
	raft2021/08/16 22:43:24 INFO: e5596a975f8061c0 became leader at term 2
	raft2021/08/16 22:43:24 INFO: raft.node: e5596a975f8061c0 elected leader e5596a975f8061c0 at term 2
	2021-08-16 22:43:24.261964 I | etcdserver: published {Name:default-k8s-different-port-20210816223418-6986 ClientURLs:[https://192.168.50.186:2379]} to cluster e001ea9e448e2c
	2021-08-16 22:43:24.263575 I | embed: ready to serve client requests
	2021-08-16 22:43:24.272425 I | etcdserver: setting up the initial cluster version to 3.4
	2021-08-16 22:43:24.273101 I | embed: ready to serve client requests
	2021-08-16 22:43:24.282779 I | embed: serving client requests on 127.0.0.1:2379
	2021-08-16 22:43:24.285054 I | embed: serving client requests on 192.168.50.186:2379
	2021-08-16 22:43:24.285291 N | etcdserver/membership: set the initial cluster version to 3.4
	2021-08-16 22:43:24.285423 I | etcdserver/api: enabled capabilities for version 3.4
	2021-08-16 22:43:40.337207 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:43:41.515116 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:43:48.547793 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/coredns\" " with result "range_response_count:1 size:612" took too long (144.579592ms) to execute
	2021-08-16 22:43:51.515451 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2021-08-16 22:44:01.515842 I | etcdserver/api/etcdhttp: /health OK (status code 200)
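
Note: the periodic "/health OK" probes above come from etcd's HTTP endpoints. A minimal sketch of the same check against the metrics listener this run logged ("listening for metrics on http://127.0.0.1:2381"); the address is taken from the log above, not guaranteed elsewhere:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 2 * time.Second}
		// etcd serves /health on the metrics listener logged above.
		resp, err := client.Get("http://127.0.0.1:2381/health")
		if err != nil {
			fmt.Println("health check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body))
	}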
	
	* 
	* ==> kernel <==
	*  22:45:49 up 8 min,  0 users,  load average: 0.44, 0.73, 0.39
	Linux default-k8s-different-port-20210816223418-6986 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [ee6e3ad90fb4e19819658b68382724e6a5cf8fe284be81a4b38efc7636b1dde1] <==
	* W0816 22:45:41.527060       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:45:41.601547       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:45:41.636543       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0816 22:45:41.849720       1 client.go:360] parsed scheme: "passthrough"
	I0816 22:45:41.850442       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0816 22:45:41.850732       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0816 22:45:41.955476       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:45:41.978984       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:45:42.156088       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:45:42.165731       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:45:42.924009       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:45:43.225855       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:45:43.467589       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:45:46.063886       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:45:46.083439       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:45:46.202226       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:45:46.745643       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:45:46.920727       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	W0816 22:45:48.105857       1 clientconn.go:1223] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: i/o timeout". Reconnecting...
	I0816 22:45:49.748865       1 trace.go:205] Trace[1926813337]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: (16-Aug-2021 22:44:49.748) (total time: 60000ms):
	Trace[1926813337]: [1m0.000014641s] [1m0.000014641s] END
	E0816 22:45:49.749186       1 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
	I0816 22:45:49.750024       1 trace.go:205] Trace[383870136]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.21.3 (linux/amd64) kubernetes/ca643a4,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (16-Aug-2021 22:44:49.748) (total time: 60001ms):
	Trace[383870136]: [1m0.00120119s] [1m0.00120119s] END
	E0816 22:45:49.750703       1 wrap.go:54] timeout or abort while handling: GET "/api/v1/nodes"
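
Note: every line above is one symptom: the apiserver's gRPC client keeps redialing 127.0.0.1:2379 because etcd stopped answering (likely paused as part of this Pause test), so the kubectl List issued at 22:44:49 exhausts its 60s budget. A minimal sketch of the underlying TCP-level check, assuming etcd's default client port on localhost:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Mirrors the TCP dial that the grpc transport retries above.
		conn, err := net.DialTimeout("tcp", "127.0.0.1:2379", 2*time.Second)
		if err != nil {
			fmt.Println("etcd unreachable:", err) // expected while etcd is paused
			return
		}
		conn.Close()
		fmt.Println("etcd client port reachable")
	}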
	
	* 
	* ==> kube-controller-manager [b98cd396fdb126a029db11a0dc2fba55c059c5e1325b67b4a634d563af29288b] <==
	* E0816 22:43:50.293231       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:50.312112       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:43:50.322851       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:50.323848       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:43:50.333753       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:43:50.334111       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:50.412580       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:43:50.412653       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:50.414212       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	E0816 22:43:50.417056       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:43:50.452424       1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" failed with pods "kubernetes-dashboard-6fcdf4f6d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0816 22:43:50.452951       1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" failed with pods "dashboard-metrics-scraper-8685c45546-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0816 22:43:50.453019       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-6fcdf4f6d-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:43:50.453035       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-8685c45546-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0816 22:43:50.521534       1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6fcdf4f6d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6fcdf4f6d-7gstk"
	I0816 22:43:50.635016       1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-8685c45546-nctsf"
	E0816 22:44:15.468466       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:44:15.903027       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:44:45.504006       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:44:45.939848       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:45:14.504310       1 node_lifecycle_controller.go:1107] Error updating node default-k8s-different-port-20210816223418-6986: Timeout: request did not complete within requested timeout context deadline exceeded
	E0816 22:45:15.529068       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:45:15.986596       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0816 22:45:45.565646       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0816 22:45:46.037580       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [59465014a8ff9c336e193f30707376555908063b6c426b0916db7ecc4bcbdea5] <==
	* I0816 22:43:46.550878       1 node.go:172] Successfully retrieved node IP: 192.168.50.186
	I0816 22:43:46.550950       1 server_others.go:140] Detected node IP 192.168.50.186
	W0816 22:43:46.551113       1 server_others.go:598] Unknown proxy mode "", assuming iptables proxy
	W0816 22:43:46.603198       1 server_others.go:197] No iptables support for IPv6: exit status 3
	I0816 22:43:46.604257       1 server_others.go:208] kube-proxy running in single-stack IPv4 mode
	I0816 22:43:46.604284       1 server_others.go:212] Using iptables Proxier.
	I0816 22:43:46.604671       1 server.go:643] Version: v1.21.3
	I0816 22:43:46.606945       1 config.go:224] Starting endpoint slice config controller
	I0816 22:43:46.608974       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0816 22:43:46.608754       1 config.go:315] Starting service config controller
	I0816 22:43:46.610177       1 shared_informer.go:240] Waiting for caches to sync for service config
	W0816 22:43:46.628591       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	W0816 22:43:46.631027       1 warnings.go:70] discovery.k8s.io/v1beta1 EndpointSlice is deprecated in v1.21+, unavailable in v1.25+; use discovery.k8s.io/v1 EndpointSlice
	I0816 22:43:46.710924       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0816 22:43:46.711125       1 shared_informer.go:247] Caches are synced for service config 
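
Note: kube-proxy falls back in two steps above: an empty --proxy-mode means "assume iptables", and the failed IPv6 probe ("exit status 3") drops it to single-stack IPv4. A rough illustration of such a probe (an assumption for illustration only; kube-proxy's real detection goes through its iptables utility package):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// If ip6tables cannot list rules on this kernel, treat IPv6 as unsupported.
		if err := exec.Command("ip6tables", "-L", "-n").Run(); err != nil {
			fmt.Println("no usable IPv6 iptables:", err)
			return
		}
		fmt.Println("IPv6 iptables available")
	}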
	
	* 
	* ==> kube-scheduler [91c23817fb7abda2b7c16241219d51e4ed678c5cb58792aa6ddbebafd8c8e1d7] <==
	* I0816 22:43:28.847982       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0816 22:43:28.857263       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:43:28.871199       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0816 22:43:28.875806       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:43:28.876635       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:43:28.879652       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0816 22:43:28.880282       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0816 22:43:28.880793       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:43:28.880851       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:43:28.880928       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:43:28.880986       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0816 22:43:28.881036       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0816 22:43:28.881082       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:43:28.881093       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:43:28.881264       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:43:29.699413       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0816 22:43:29.802196       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0816 22:43:29.809598       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:43:29.912489       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0816 22:43:30.006080       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0816 22:43:30.009886       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0816 22:43:30.027658       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0816 22:43:30.177599       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0816 22:43:30.333038       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0816 22:43:32.148167       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
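
Note: the burst of "forbidden" errors is the usual startup race: the scheduler's informers begin listing before the apiserver has finished bootstrapping its RBAC roles, and the errors stop once authorization catches up (caches sync at 22:43:32). A hedged sketch of checking such a permission with client-go's SelfSubjectAccessReview; it tests the caller's own credentials rather than system:kube-scheduler, and the kubeconfig path is this VM's:

	package main

	import (
		"context"
		"fmt"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		// Ask the apiserver whether our credentials may list nodes --
		// the same verb/resource the scheduler is denied above.
		sar := &authv1.SelfSubjectAccessReview{
			Spec: authv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authv1.ResourceAttributes{Verb: "list", Resource: "nodes"},
			},
		}
		res, err := cs.AuthorizationV1().SelfSubjectAccessReviews().
			Create(context.TODO(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("allowed:", res.Status.Allowed)
	}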
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:37:38 UTC, end at Mon 2021-08-16 22:45:50 UTC. --
	Aug 16 22:43:50 default-k8s-different-port-20210816223418-6986 kubelet[5632]: I0816 22:43:50.688008    5632 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/806d3966-d956-400b-b825-eb1393026138-tmp-volume\") pod \"kubernetes-dashboard-6fcdf4f6d-7gstk\" (UID: \"806d3966-d956-400b-b825-eb1393026138\") "
	Aug 16 22:43:50 default-k8s-different-port-20210816223418-6986 kubelet[5632]: I0816 22:43:50.744073    5632 topology_manager.go:187] "Topology Admit Handler"
	Aug 16 22:43:50 default-k8s-different-port-20210816223418-6986 kubelet[5632]: I0816 22:43:50.790433    5632 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-984n5\" (UniqueName: \"kubernetes.io/projected/3023a30f-e167-4d2d-9cb7-5f01b3a89700-kube-api-access-984n5\") pod \"dashboard-metrics-scraper-8685c45546-nctsf\" (UID: \"3023a30f-e167-4d2d-9cb7-5f01b3a89700\") "
	Aug 16 22:43:50 default-k8s-different-port-20210816223418-6986 kubelet[5632]: I0816 22:43:50.795038    5632 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/3023a30f-e167-4d2d-9cb7-5f01b3a89700-tmp-volume\") pod \"dashboard-metrics-scraper-8685c45546-nctsf\" (UID: \"3023a30f-e167-4d2d-9cb7-5f01b3a89700\") "
	Aug 16 22:43:51 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:43:51.823508    5632 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:43:51 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:43:51.823559    5632 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:43:51 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:43:51.823847    5632 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5w76r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-pbxnr_kube-system(fa2d27a5-b243-4a8f-9450-b834d1ce5bb0): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:43:51 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:43:51.823907    5632 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-pbxnr" podUID=fa2d27a5-b243-4a8f-9450-b834d1ce5bb0
	Aug 16 22:43:52 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:43:52.193302    5632 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-7c784ccb57-pbxnr" podUID=fa2d27a5-b243-4a8f-9450-b834d1ce5bb0
	Aug 16 22:43:59 default-k8s-different-port-20210816223418-6986 kubelet[5632]: I0816 22:43:59.293048    5632 scope.go:111] "RemoveContainer" containerID="754475885127981608f71b96bb2e3ba5252c57ef4af4a244323d808ff263c78f"
	Aug 16 22:44:00 default-k8s-different-port-20210816223418-6986 kubelet[5632]: I0816 22:44:00.308642    5632 scope.go:111] "RemoveContainer" containerID="754475885127981608f71b96bb2e3ba5252c57ef4af4a244323d808ff263c78f"
	Aug 16 22:44:00 default-k8s-different-port-20210816223418-6986 kubelet[5632]: I0816 22:44:00.310201    5632 scope.go:111] "RemoveContainer" containerID="cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363"
	Aug 16 22:44:00 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:44:00.311651    5632 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-nctsf_kubernetes-dashboard(3023a30f-e167-4d2d-9cb7-5f01b3a89700)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-nctsf" podUID=3023a30f-e167-4d2d-9cb7-5f01b3a89700
	Aug 16 22:44:01 default-k8s-different-port-20210816223418-6986 kubelet[5632]: I0816 22:44:01.316243    5632 scope.go:111] "RemoveContainer" containerID="cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363"
	Aug 16 22:44:01 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:44:01.317822    5632 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-nctsf_kubernetes-dashboard(3023a30f-e167-4d2d-9cb7-5f01b3a89700)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-nctsf" podUID=3023a30f-e167-4d2d-9cb7-5f01b3a89700
	Aug 16 22:44:02 default-k8s-different-port-20210816223418-6986 kubelet[5632]: I0816 22:44:02.319878    5632 scope.go:111] "RemoveContainer" containerID="cf018e1aa6e203b5186b274b1c6b632d80795bd4c12e8d1c8ec84f07f1c84363"
	Aug 16 22:44:02 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:44:02.321612    5632 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8685c45546-nctsf_kubernetes-dashboard(3023a30f-e167-4d2d-9cb7-5f01b3a89700)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-8685c45546-nctsf" podUID=3023a30f-e167-4d2d-9cb7-5f01b3a89700
	Aug 16 22:44:04 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:44:04.063102    5632 remote_image.go:114] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:44:04 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:44:04.063318    5632 kuberuntime_image.go:51] "Failed to pull image" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to resolve reference \"fake.domain/k8s.gcr.io/echoserver:1.4\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host" image="fake.domain/k8s.gcr.io/echoserver:1.4"
	Aug 16 22:44:04 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:44:04.067426    5632 kuberuntime_manager.go:864] container &Container{Name:metrics-server,Image:fake.domain/k8s.gcr.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=15s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{314572800 0} {<nil>} 300Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5w76r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez?exclude=readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz?exclude=livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-7c784ccb57-pbxnr_kube-system(fa2d27a5-b243-4a8f-9450-b834d1ce5bb0): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/k8s.gcr.io/echoserver:1.4": failed to resolve reference "fake.domain/k8s.gcr.io/echoserver:1.4": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Aug 16 22:44:04 default-k8s-different-port-20210816223418-6986 kubelet[5632]: E0816 22:44:04.067833    5632 pod_workers.go:190] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"rpc error: code = Unknown desc = failed to pull and unpack image \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/k8s.gcr.io/echoserver:1.4\\\": failed to do request: Head https://fake.domain/v2/k8s.gcr.io/echoserver/manifests/1.4: dial tcp: lookup fake.domain on 192.168.122.1:53: no such host\"" pod="kube-system/metrics-server-7c784ccb57-pbxnr" podUID=fa2d27a5-b243-4a8f-9450-b834d1ce5bb0
	Aug 16 22:44:04 default-k8s-different-port-20210816223418-6986 kubelet[5632]: I0816 22:44:04.389451    5632 dynamic_cafile_content.go:182] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Aug 16 22:44:04 default-k8s-different-port-20210816223418-6986 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:44:04 default-k8s-different-port-20210816223418-6986 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:44:04 default-k8s-different-port-20210816223418-6986 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
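
Note: the ErrImagePull entries above are expected in this suite: the metrics-server addon is deliberately pointed at fake.domain/k8s.gcr.io/echoserver:1.4, and the pull dies at DNS resolution before any registry traffic. A minimal sketch reproducing that first failing step:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// The kubelet errors above bottom out in this lookup:
		// "dial tcp: lookup fake.domain ... no such host".
		addrs, err := net.LookupHost("fake.domain")
		if err != nil {
			fmt.Println("lookup failed as expected:", err)
			return
		}
		fmt.Println("unexpectedly resolved:", addrs)
	}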
	
	* 
	* ==> kubernetes-dashboard [091f1b89bc6ac929a99a8c2d63bb67a16934b4902698c9995a38073d0dda1f1f] <==
	* 2021/08/16 22:43:52 Using namespace: kubernetes-dashboard
	2021/08/16 22:43:52 Using in-cluster config to connect to apiserver
	2021/08/16 22:43:53 Using secret token for csrf signing
	2021/08/16 22:43:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/08/16 22:43:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/08/16 22:43:53 Successful initial request to the apiserver, version: v1.21.3
	2021/08/16 22:43:53 Generating JWE encryption key
	2021/08/16 22:43:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/08/16 22:43:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/08/16 22:43:53 Initializing JWE encryption key from synchronized object
	2021/08/16 22:43:53 Creating in-cluster Sidecar client
	2021/08/16 22:43:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/08/16 22:43:53 Serving insecurely on HTTP port: 9090
	2021/08/16 22:43:52 Starting overwatch
	
	* 
	* ==> storage-provisioner [dfd15db9428f264623cd687de765401bc3b7f7293c3ecad264cc26e1ff22cd79] <==
	* 	/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:880 +0x4af
	
	goroutine 78 [sync.Cond.Wait]:
	sync.runtime_notifyListWait(0xc000110110, 0x0)
		/usr/local/go/src/runtime/sema.go:513 +0xf8
	sync.(*Cond).Wait(0xc000110100)
		/usr/local/go/src/sync/cond.go:56 +0x99
	k8s.io/client-go/util/workqueue.(*Type).Get(0xc000100300, 0x0, 0x0, 0x404020302030300)
		/Users/medya/go/pkg/mod/k8s.io/client-go@v0.20.5/util/workqueue/queue.go:145 +0x89
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).processNextVolumeWorkItem(0xc00040ea00, 0x18e5530, 0xc000110240, 0x203000)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:990 +0x3e
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).runVolumeWorker(...)
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:929
	sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1.3()
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x5c
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00038ce20)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:155 +0x5f
	k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00038ce20, 0x18b3d60, 0xc00039a420, 0x1, 0xc00014a5a0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:156 +0x9b
	k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00038ce20, 0x3b9aca00, 0x0, 0x1, 0xc00014a5a0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:133 +0x98
	k8s.io/apimachinery/pkg/util/wait.Until(0xc00038ce20, 0x3b9aca00, 0xc00014a5a0)
		/Users/medya/go/pkg/mod/k8s.io/apimachinery@v0.20.5/pkg/util/wait/wait.go:90 +0x4d
	created by sigs.k8s.io/sig-storage-lib-external-provisioner/v6/controller.(*ProvisionController).Run.func1
		/Users/medya/go/pkg/mod/sigs.k8s.io/sig-storage-lib-external-provisioner/v6@v6.3.0/controller/controller.go:881 +0x3d6
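
Note: this goroutine is idle, not stuck: the sync.Cond.Wait frame is workqueue Get parking a provision worker inside the standard wait.Until loop. A minimal sketch of that worker pattern, using the same apimachinery/client-go packages the stack names:

	package main

	import (
		"fmt"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/util/workqueue"
	)

	func main() {
		queue := workqueue.New()
		stop := make(chan struct{})

		// The worker blocks in queue.Get() until an item arrives -- the
		// same parked frame visible in the dump above.
		go wait.Until(func() {
			for {
				item, shutdown := queue.Get()
				if shutdown {
					return
				}
				fmt.Println("processing", item)
				queue.Done(item)
			}
		}, time.Second, stop)

		queue.Add("pvc-example")
		time.Sleep(100 * time.Millisecond)
		queue.ShutDown()
		close(stop)
	}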
	
-- /stdout --
** stderr ** 
	E0816 22:45:49.753186   20976 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.21.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: stream error: stream ID 1; INTERNAL_ERROR
	 output: "\n** stderr ** \nUnable to connect to the server: stream error: stream ID 1; INTERNAL_ERROR\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/default-k8s-different-port/serial/Pause (106.14s)

TestStartStop/group/newest-cni/serial/Pause (84.5s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20210816224431-6986 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-20210816224431-6986 --alsologtostderr -v=1: exit status 80 (2.110695176s)

-- stdout --
	* Pausing node newest-cni-20210816224431-6986 ...

-- /stdout --
** stderr ** 
	I0816 22:47:19.923797   22015 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:47:19.924006   22015 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:47:19.924016   22015 out.go:311] Setting ErrFile to fd 2...
	I0816 22:47:19.924021   22015 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:47:19.924152   22015 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:47:19.924390   22015 out.go:305] Setting JSON to false
	I0816 22:47:19.924411   22015 mustload.go:65] Loading cluster: newest-cni-20210816224431-6986
	I0816 22:47:19.924824   22015 config.go:177] Loaded profile config "newest-cni-20210816224431-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0816 22:47:19.925347   22015 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:47:19.925393   22015 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:47:19.937502   22015 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43823
	I0816 22:47:19.937937   22015 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:47:19.938511   22015 main.go:130] libmachine: Using API Version  1
	I0816 22:47:19.938530   22015 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:47:19.938897   22015 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:47:19.939067   22015 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetState
	I0816 22:47:19.941906   22015 host.go:66] Checking if "newest-cni-20210816224431-6986" exists ...
	I0816 22:47:19.942361   22015 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:47:19.942401   22015 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:47:19.952600   22015 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35549
	I0816 22:47:19.953078   22015 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:47:19.953547   22015 main.go:130] libmachine: Using API Version  1
	I0816 22:47:19.953570   22015 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:47:19.953910   22015 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:47:19.954111   22015 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:47:19.954800   22015 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso https://github.com/kubernetes/minikube/releases/download/v1.22.0-1628622362-12032/minikube-v1.22.0-1628622362-12032.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.22.0-1628622362-12032.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-20210816224431-6986 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0816 22:47:19.956908   22015 out.go:177] * Pausing node newest-cni-20210816224431-6986 ... 
	I0816 22:47:19.956939   22015 host.go:66] Checking if "newest-cni-20210816224431-6986" exists ...
	I0816 22:47:19.957344   22015 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:47:19.957389   22015 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:47:19.968188   22015 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:40691
	I0816 22:47:19.968586   22015 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:47:19.969063   22015 main.go:130] libmachine: Using API Version  1
	I0816 22:47:19.969084   22015 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:47:19.969368   22015 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:47:19.969566   22015 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:47:19.969726   22015 ssh_runner.go:149] Run: systemctl --version
	I0816 22:47:19.969749   22015 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:47:19.974989   22015 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:47:19.975420   22015 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:47:19.975448   22015 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:47:19.975579   22015 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:47:19.975762   22015 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:47:19.975900   22015 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:47:19.976001   22015 sshutil.go:53] new ssh client: &{IP:192.168.116.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa Username:docker}
	I0816 22:47:20.065468   22015 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:47:20.075964   22015 pause.go:50] kubelet running: true
	I0816 22:47:20.076031   22015 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:47:20.248583   22015 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:47:20.248661   22015 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:47:20.377712   22015 cri.go:76] found id: "5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217"
	I0816 22:47:20.377758   22015 cri.go:76] found id: "02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9"
	I0816 22:47:20.377769   22015 cri.go:76] found id: "148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019"
	I0816 22:47:20.377776   22015 cri.go:76] found id: "22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a"
	I0816 22:47:20.377781   22015 cri.go:76] found id: "7b31f609984fc54447e9df1259fba6a55768fcd53cfb0dc5bcfd7400326bdf1c"
	I0816 22:47:20.377787   22015 cri.go:76] found id: ""
	I0816 22:47:20.377839   22015 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0816 22:47:20.411847   22015 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9","pid":2790,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9/rootfs","created":"2021-08-16T22:47:09.351791189Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019","pid":2744,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019/rootfs","created":"2021-08-16T22:47:02.47081031Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a","pid":2651,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a/rootfs","created":"2021-08-16T22:47:01.472643245Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"29ad7d9cab12627161fb254dbc9f25f5e3cc9210bb139f92bf5fc22fe5dd2b7f"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"29ad7d9cab12627161fb254dbc9f25f5e3cc9210bb139f92bf5fc22fe5dd2b7f","pid":2617,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/29ad7d9cab12627161fb254dbc9f25f5e3cc9210bb139f92bf5fc22fe5dd2b7f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/29ad7d9cab12627161fb254dbc9f25f5e3cc9210bb139f92bf5fc22fe5dd2b7f/rootfs","created":"2021-08-16T22:47:01.088102431Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"29ad7d9cab12627161fb254dbc9f25f5e3cc9210bb139f92bf5fc22fe5dd2b7f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210816224431-6986_4cf2cf8be7a9b807bb48c482cec287a1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0","pid":2459,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0/rootfs","created":"2021-08-16T22:46:47.600645535Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-20210816224431-6986_19ed5b26e62fa127352c9e97d58d0833"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217","pid":2848,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217/rootfs","created":"2021-08-16T22:47:16.235328293Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5","pid":2704,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5/rootfs","created":"2021-08-16T22:47:02.075572067Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210816224431-6986_7e2f77f67b13e20a69cee33bb38b10ea"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf","pid":2531,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf/rootfs","created":"2021-08-16T22:46:59.064984597Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210816224431-6986_882330b4d6892cdf4d64f1f1052d2a50"},"owner":"root"}]
	I0816 22:47:20.411998   22015 cri.go:113] list returned 8 containers
	I0816 22:47:20.412012   22015 cri.go:116] container: {ID:02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9 Status:running}
	I0816 22:47:20.412023   22015 cri.go:116] container: {ID:148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019 Status:running}
	I0816 22:47:20.412028   22015 cri.go:116] container: {ID:22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a Status:running}
	I0816 22:47:20.412032   22015 cri.go:116] container: {ID:29ad7d9cab12627161fb254dbc9f25f5e3cc9210bb139f92bf5fc22fe5dd2b7f Status:running}
	I0816 22:47:20.412037   22015 cri.go:118] skipping 29ad7d9cab12627161fb254dbc9f25f5e3cc9210bb139f92bf5fc22fe5dd2b7f - not in ps
	I0816 22:47:20.412041   22015 cri.go:116] container: {ID:3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0 Status:running}
	I0816 22:47:20.412045   22015 cri.go:118] skipping 3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0 - not in ps
	I0816 22:47:20.412048   22015 cri.go:116] container: {ID:5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217 Status:running}
	I0816 22:47:20.412052   22015 cri.go:116] container: {ID:8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5 Status:running}
	I0816 22:47:20.412056   22015 cri.go:118] skipping 8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5 - not in ps
	I0816 22:47:20.412060   22015 cri.go:116] container: {ID:ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf Status:running}
	I0816 22:47:20.412064   22015 cri.go:118] skipping ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf - not in ps
	I0816 22:47:20.412107   22015 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9
	I0816 22:47:20.442158   22015 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9 148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019
	I0816 22:47:20.463206   22015 retry.go:31] will retry after 276.165072ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9 148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:47:20Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0816 22:47:20.739664   22015 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:47:20.751628   22015 pause.go:50] kubelet running: false
	I0816 22:47:20.751693   22015 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:47:20.902578   22015 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:47:20.902679   22015 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:47:21.024024   22015 cri.go:76] found id: "5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217"
	I0816 22:47:21.024051   22015 cri.go:76] found id: "02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9"
	I0816 22:47:21.024058   22015 cri.go:76] found id: "148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019"
	I0816 22:47:21.024063   22015 cri.go:76] found id: "22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a"
	I0816 22:47:21.024068   22015 cri.go:76] found id: "7b31f609984fc54447e9df1259fba6a55768fcd53cfb0dc5bcfd7400326bdf1c"
	I0816 22:47:21.024073   22015 cri.go:76] found id: ""
	I0816 22:47:21.024120   22015 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0816 22:47:21.053898   22015 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9","pid":2790,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9/rootfs","created":"2021-08-16T22:47:09.351791189Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019","pid":2744,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019","rootfs":"/run/containerd/io.containerd.runtime.v2.tas
k/k8s.io/148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019/rootfs","created":"2021-08-16T22:47:02.47081031Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a","pid":2651,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a/rootfs","created":"2021-08-16T22:47:01.472643245Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"29ad7d9cab12627161fb254dbc9f25f5e3cc9210bb139f92bf5fc22fe5dd2b7f"},"owner":"root"},{"ociV
ersion":"1.0.2-dev","id":"29ad7d9cab12627161fb254dbc9f25f5e3cc9210bb139f92bf5fc22fe5dd2b7f","pid":2617,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/29ad7d9cab12627161fb254dbc9f25f5e3cc9210bb139f92bf5fc22fe5dd2b7f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/29ad7d9cab12627161fb254dbc9f25f5e3cc9210bb139f92bf5fc22fe5dd2b7f/rootfs","created":"2021-08-16T22:47:01.088102431Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"29ad7d9cab12627161fb254dbc9f25f5e3cc9210bb139f92bf5fc22fe5dd2b7f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210816224431-6986_4cf2cf8be7a9b807bb48c482cec287a1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0","pid":2459,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0","rootfs":"/ru
n/containerd/io.containerd.runtime.v2.task/k8s.io/3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0/rootfs","created":"2021-08-16T22:46:47.600645535Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-20210816224431-6986_19ed5b26e62fa127352c9e97d58d0833"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217","pid":2848,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217/rootfs","created":"2021-08-16T22:47:16.235328293Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-typ
e":"container","io.kubernetes.cri.sandbox-id":"ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5","pid":2704,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5/rootfs","created":"2021-08-16T22:47:02.075572067Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210816224431-6986_7e2f77f67b13e20a69cee33bb38b10ea"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf","pid":2531,"status":"running","bundl
e":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf/rootfs","created":"2021-08-16T22:46:59.064984597Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210816224431-6986_882330b4d6892cdf4d64f1f1052d2a50"},"owner":"root"}]
	I0816 22:47:21.054014   22015 cri.go:113] list returned 8 containers
	I0816 22:47:21.054026   22015 cri.go:116] container: {ID:02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9 Status:paused}
	I0816 22:47:21.054036   22015 cri.go:122] skipping {02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9 paused}: state = "paused", want "running"
	I0816 22:47:21.054047   22015 cri.go:116] container: {ID:148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019 Status:running}
	I0816 22:47:21.054051   22015 cri.go:116] container: {ID:22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a Status:running}
	I0816 22:47:21.054056   22015 cri.go:116] container: {ID:29ad7d9cab12627161fb254dbc9f25f5e3cc9210bb139f92bf5fc22fe5dd2b7f Status:running}
	I0816 22:47:21.054061   22015 cri.go:118] skipping 29ad7d9cab12627161fb254dbc9f25f5e3cc9210bb139f92bf5fc22fe5dd2b7f - not in ps
	I0816 22:47:21.054065   22015 cri.go:116] container: {ID:3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0 Status:running}
	I0816 22:47:21.054069   22015 cri.go:118] skipping 3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0 - not in ps
	I0816 22:47:21.054072   22015 cri.go:116] container: {ID:5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217 Status:running}
	I0816 22:47:21.054077   22015 cri.go:116] container: {ID:8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5 Status:running}
	I0816 22:47:21.054081   22015 cri.go:118] skipping 8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5 - not in ps
	I0816 22:47:21.054084   22015 cri.go:116] container: {ID:ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf Status:running}
	I0816 22:47:21.054088   22015 cri.go:118] skipping ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf - not in ps
	I0816 22:47:21.054122   22015 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019
	I0816 22:47:21.075128   22015 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019 22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a
	I0816 22:47:21.094342   22015 retry.go:31] will retry after 540.190908ms: runc: sudo runc --root /run/containerd/runc/k8s.io pause 148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019 22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:47:21Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	I0816 22:47:21.635072   22015 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:47:21.646198   22015 pause.go:50] kubelet running: false
	I0816 22:47:21.646243   22015 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0816 22:47:21.796117   22015 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:running Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0816 22:47:21.796187   22015 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0816 22:47:21.910695   22015 cri.go:76] found id: "5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217"
	I0816 22:47:21.910716   22015 cri.go:76] found id: "02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9"
	I0816 22:47:21.910721   22015 cri.go:76] found id: "148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019"
	I0816 22:47:21.910725   22015 cri.go:76] found id: "22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a"
	I0816 22:47:21.910728   22015 cri.go:76] found id: "7b31f609984fc54447e9df1259fba6a55768fcd53cfb0dc5bcfd7400326bdf1c"
	I0816 22:47:21.910733   22015 cri.go:76] found id: ""
	I0816 22:47:21.910770   22015 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0816 22:47:21.939115   22015 cri.go:103] JSON = [{"ociVersion":"1.0.2-dev","id":"02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9","pid":2790,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9/rootfs","created":"2021-08-16T22:47:09.351791189Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019","pid":2744,"status":"paused","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019","rootfs":"/run/containerd/io.containerd.runtime.v2.task
/k8s.io/148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019/rootfs","created":"2021-08-16T22:47:02.47081031Z","annotations":{"io.kubernetes.cri.container-name":"kube-apiserver","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a","pid":2651,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a/rootfs","created":"2021-08-16T22:47:01.472643245Z","annotations":{"io.kubernetes.cri.container-name":"kube-scheduler","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.sandbox-id":"29ad7d9cab12627161fb254dbc9f25f5e3cc9210bb139f92bf5fc22fe5dd2b7f"},"owner":"root"},{"ociVe
rsion":"1.0.2-dev","id":"29ad7d9cab12627161fb254dbc9f25f5e3cc9210bb139f92bf5fc22fe5dd2b7f","pid":2617,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/29ad7d9cab12627161fb254dbc9f25f5e3cc9210bb139f92bf5fc22fe5dd2b7f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/29ad7d9cab12627161fb254dbc9f25f5e3cc9210bb139f92bf5fc22fe5dd2b7f/rootfs","created":"2021-08-16T22:47:01.088102431Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"29ad7d9cab12627161fb254dbc9f25f5e3cc9210bb139f92bf5fc22fe5dd2b7f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-20210816224431-6986_4cf2cf8be7a9b807bb48c482cec287a1"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0","pid":2459,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0","rootfs":"/run
/containerd/io.containerd.runtime.v2.task/k8s.io/3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0/rootfs","created":"2021-08-16T22:46:47.600645535Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-20210816224431-6986_19ed5b26e62fa127352c9e97d58d0833"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217","pid":2848,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217/rootfs","created":"2021-08-16T22:47:16.235328293Z","annotations":{"io.kubernetes.cri.container-name":"kube-controller-manager","io.kubernetes.cri.container-type
":"container","io.kubernetes.cri.sandbox-id":"ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5","pid":2704,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5/rootfs","created":"2021-08-16T22:47:02.075572067Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-20210816224431-6986_7e2f77f67b13e20a69cee33bb38b10ea"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf","pid":2531,"status":"running","bundle
":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf/rootfs","created":"2021-08-16T22:46:59.064984597Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.sandbox-id":"ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-20210816224431-6986_882330b4d6892cdf4d64f1f1052d2a50"},"owner":"root"}]
	I0816 22:47:21.939231   22015 cri.go:113] list returned 8 containers
	I0816 22:47:21.939240   22015 cri.go:116] container: {ID:02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9 Status:paused}
	I0816 22:47:21.939251   22015 cri.go:122] skipping {02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9 paused}: state = "paused", want "running"
	I0816 22:47:21.939262   22015 cri.go:116] container: {ID:148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019 Status:paused}
	I0816 22:47:21.939267   22015 cri.go:122] skipping {148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019 paused}: state = "paused", want "running"
	I0816 22:47:21.939274   22015 cri.go:116] container: {ID:22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a Status:running}
	I0816 22:47:21.939279   22015 cri.go:116] container: {ID:29ad7d9cab12627161fb254dbc9f25f5e3cc9210bb139f92bf5fc22fe5dd2b7f Status:running}
	I0816 22:47:21.939286   22015 cri.go:118] skipping 29ad7d9cab12627161fb254dbc9f25f5e3cc9210bb139f92bf5fc22fe5dd2b7f - not in ps
	I0816 22:47:21.939293   22015 cri.go:116] container: {ID:3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0 Status:running}
	I0816 22:47:21.939301   22015 cri.go:118] skipping 3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0 - not in ps
	I0816 22:47:21.939307   22015 cri.go:116] container: {ID:5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217 Status:running}
	I0816 22:47:21.939317   22015 cri.go:116] container: {ID:8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5 Status:running}
	I0816 22:47:21.939325   22015 cri.go:118] skipping 8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5 - not in ps
	I0816 22:47:21.939330   22015 cri.go:116] container: {ID:ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf Status:running}
	I0816 22:47:21.939342   22015 cri.go:118] skipping ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf - not in ps
	I0816 22:47:21.939385   22015 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a
	I0816 22:47:21.958432   22015 ssh_runner.go:149] Run: sudo runc --root /run/containerd/runc/k8s.io pause 22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a 5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217
	I0816 22:47:21.977283   22015 out.go:177] 
	W0816 22:47:21.977475   22015 out.go:242] X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a 5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T22:47:21Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	W0816 22:47:21.977489   22015 out.go:242] * 
	[warning]: invalid value provided to Color, using default
	W0816 22:47:21.980356   22015 out.go:242] ╭──────────────────────────────────────────────────────────────────────────────╮
	│                                                                              │
	│    * If the above advice does not help, please let us know:                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	│                                                                              │
	│    * Please attach the following file to the GitHub issue:                   │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                              │
	╰──────────────────────────────────────────────────────────────────────────────╯
	I0816 22:47:21.981883   22015 out.go:177] 

** /stderr **
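The failure captured above is mechanical rather than environmental: "runc pause" accepts exactly one container ID, but after pausing the first container minikube reissues the command with the remaining IDs batched into a single invocation ("pause <id1> <id2>"), which runc rejects with '"pause" requires exactly 1 argument(s)'. Each retry repeats the same malformed call, so the batched form can never succeed no matter how often it is retried. A minimal sketch of the obvious fix, issuing one "runc pause" per container (the helper below is illustrative, not minikube's actual code):

	// pauseContainers issues one "runc pause" per container ID, since
	// runc requires exactly one ID per invocation. Illustrative sketch
	// only; the command layout mirrors the ssh_runner lines above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func pauseContainers(root string, ids []string) error {
		for _, id := range ids {
			// sudo runc --root <root> pause <id>
			cmd := exec.Command("sudo", "runc", "--root", root, "pause", id)
			if out, err := cmd.CombinedOutput(); err != nil {
				return fmt.Errorf("runc pause %s: %v: %s", id, err, out)
			}
		}
		return nil
	}

	func main() {
		ids := []string{
			"22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a",
			"5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217",
		}
		if err := pauseContainers("/run/containerd/runc/k8s.io", ids); err != nil {
			fmt.Println(err)
		}
	}

Pausing sequentially costs a few extra SSH round-trips, but every command it emits is one runc will actually accept.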
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p newest-cni-20210816224431-6986 --alsologtostderr -v=1 failed: exit status 80
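For context on the retry cadence visible above (276ms, then 540ms between attempts): minikube's retry helper waits a roughly doubling interval between tries before giving up and surfacing GUEST_PAUSE; exit status 80 appears to be minikube's generic guest-error code, consistent with the GUEST_PAUSE reason. A self-contained sketch of that backoff pattern; the function name, starting delay, and attempt cap are assumptions for illustration:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryExpo retries fn with a roughly doubling delay between attempts,
	// mirroring the "will retry after ..." lines above. Name, starting
	// delay, and attempt cap are illustrative assumptions.
	func retryExpo(fn func() error, start time.Duration, attempts int) error {
		delay := start
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2
		}
		return err
	}

	func main() {
		err := retryExpo(func() error {
			// A call that can never succeed, like the batched pause above.
			return errors.New(`runc: "pause" requires exactly 1 argument(s)`)
		}, 276*time.Millisecond, 3)
		fmt.Println("giving up:", err)
	}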
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210816224431-6986 -n newest-cni-20210816224431-6986
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210816224431-6986 -n newest-cni-20210816224431-6986: exit status 2 (231.383134ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
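A host status of "Running" combined with exit status 2 is consistent with a VM that is up while its Kubernetes components are stopped or paused, which is why the harness flags it as "may be ok": minikube's status command encodes per-component health in its exit code rather than in stdout alone. A hypothetical illustration of composing such an exit code from component flags (the constants and their values are assumptions, not minikube's actual ones):

	package main

	import "fmt"

	// Hypothetical per-component flags OR'd into a status exit code.
	// The values are assumptions for illustration, not minikube's
	// actual constants.
	const (
		hostDown    = 1 << 0 // VM not running
		kubeletDown = 1 << 1 // kubelet stopped or disabled
		apiDown     = 1 << 2 // apiserver unreachable or paused
	)

	func statusExitCode(hostOK, kubeletOK, apiOK bool) int {
		code := 0
		if !hostOK {
			code |= hostDown
		}
		if !kubeletOK {
			code |= kubeletDown
		}
		if !apiOK {
			code |= apiDown
		}
		return code
	}

	func main() {
		// Host running but kubelet disabled, as in the log: nonzero yet
		// "may be ok" from the harness's perspective.
		fmt.Println(statusExitCode(true, false, true)) // 2
	}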
helpers_test.go:245: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-20210816224431-6986 logs -n 25
E0816 22:47:48.586814    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816223156-6986/client.crt: no such file or directory
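The stray cert_rotation errors here and elsewhere in this report appear to come from client-go's certificate-reload watcher: it is still watching client certificates for profiles such as no-preload-20210816223156-6986 that were deleted earlier in the run (see the delete entries in the Audit table below), so the open() on their client.crt fails with "no such file or directory". They are background noise relative to the pause failure under test.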
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p newest-cni-20210816224431-6986 logs -n 25: exit status 110 (40.878323391s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable dashboard -p                                        | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | embed-certs-20210816223333-6986                            |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:52 UTC | Mon, 16 Aug 2021 22:37:25 UTC |
	|         | default-k8s-different-port-20210816223418-6986             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:37:25 UTC | Mon, 16 Aug 2021 22:37:25 UTC |
	|         | default-k8s-different-port-20210816223418-6986             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816223156-6986                          | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:42:31 UTC |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                              |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:42:42 UTC | Mon, 16 Aug 2021 22:42:42 UTC |
	|         | no-preload-20210816223156-6986                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:18 UTC | Mon, 16 Aug 2021 22:43:37 UTC |
	|         | old-k8s-version-20210816223154-6986                        |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                         |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:43:44 UTC |
	|         | embed-certs-20210816223333-6986                            |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                              |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:43:47 UTC | Mon, 16 Aug 2021 22:43:47 UTC |
	|         | old-k8s-version-20210816223154-6986                        |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:37:25 UTC | Mon, 16 Aug 2021 22:43:52 UTC |
	|         | default-k8s-different-port-20210816223418-6986             |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2                        |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:43:55 UTC | Mon, 16 Aug 2021 22:43:55 UTC |
	|         | embed-certs-20210816223333-6986                            |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:44:03 UTC | Mon, 16 Aug 2021 22:44:03 UTC |
	|         | default-k8s-different-port-20210816223418-6986             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:44:30 UTC | Mon, 16 Aug 2021 22:44:30 UTC |
	|         | no-preload-20210816223156-6986                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:44:31 UTC | Mon, 16 Aug 2021 22:44:31 UTC |
	|         | no-preload-20210816223156-6986                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:45:42 UTC | Mon, 16 Aug 2021 22:45:43 UTC |
	|         | embed-certs-20210816223333-6986                            |                                                |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:45:43 UTC | Mon, 16 Aug 2021 22:45:43 UTC |
	|         | embed-certs-20210816223333-6986                            |                                                |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:45:50 UTC | Mon, 16 Aug 2021 22:45:51 UTC |
	|         | default-k8s-different-port-20210816223418-6986             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:45:51 UTC | Mon, 16 Aug 2021 22:45:51 UTC |
	|         | default-k8s-different-port-20210816223418-6986             |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210816224431-6986 --memory=2200            | newest-cni-20210816224431-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:44:31 UTC | Mon, 16 Aug 2021 22:45:57 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=containerd              |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210816224431-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:45:57 UTC | Mon, 16 Aug 2021 22:45:58 UTC |
	|         | newest-cni-20210816224431-6986                             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210816224431-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:45:58 UTC | Mon, 16 Aug 2021 22:46:00 UTC |
	|         | newest-cni-20210816224431-6986                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210816224431-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:46:00 UTC | Mon, 16 Aug 2021 22:46:00 UTC |
	|         | newest-cni-20210816224431-6986                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:46:35 UTC | Mon, 16 Aug 2021 22:46:36 UTC |
	|         | old-k8s-version-20210816223154-6986                        |                                                |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:46:36 UTC | Mon, 16 Aug 2021 22:46:36 UTC |
	|         | old-k8s-version-20210816223154-6986                        |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210816224431-6986 --memory=2200            | newest-cni-20210816224431-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:46:00 UTC | Mon, 16 Aug 2021 22:47:19 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=containerd              |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210816224431-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:47:19 UTC | Mon, 16 Aug 2021 22:47:19 UTC |
	|         | newest-cni-20210816224431-6986                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:46:00
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:46:00.956222   21514 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:46:00.956287   21514 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:46:00.956291   21514 out.go:311] Setting ErrFile to fd 2...
	I0816 22:46:00.956294   21514 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:46:00.956394   21514 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:46:00.956595   21514 out.go:305] Setting JSON to false
	I0816 22:46:00.992762   21514 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":5323,"bootTime":1629148638,"procs":166,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:46:00.992863   21514 start.go:121] virtualization: kvm guest
	I0816 22:46:00.995858   21514 out.go:177] * [newest-cni-20210816224431-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:46:00.997413   21514 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:46:00.996082   21514 notify.go:169] Checking for updates...
	I0816 22:46:00.998918   21514 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:46:01.000333   21514 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:46:01.001708   21514 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:46:01.002109   21514 config.go:177] Loaded profile config "newest-cni-20210816224431-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0816 22:46:01.002475   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:46:01.002538   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:46:01.013174   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46193
	I0816 22:46:01.013609   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:46:01.014137   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:46:01.014162   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:46:01.014568   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:46:01.014744   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:46:01.014913   21514 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:46:01.015252   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:46:01.015289   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:46:01.029317   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42513
	I0816 22:46:01.029739   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:46:01.030234   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:46:01.030256   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:46:01.030617   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:46:01.030779   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:46:01.061460   21514 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 22:46:01.061485   21514 start.go:278] selected driver: kvm2
	I0816 22:46:01.061518   21514 start.go:751] validating driver "kvm2" against &{Name:newest-cni-20210816224431-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816224431-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.116.132 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map
[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:46:01.061658   21514 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 22:46:01.062901   21514 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:46:01.063049   21514 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 22:46:01.074223   21514 install.go:137] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2 version is 1.22.0
	I0816 22:46:01.074557   21514 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 22:46:01.074586   21514 cni.go:93] Creating CNI manager for ""
	I0816 22:46:01.074600   21514 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:46:01.074625   21514 start_flags.go:277] config:
	{Name:newest-cni-20210816224431-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816224431-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.116.132 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:46:01.074737   21514 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:46:01.076704   21514 out.go:177] * Starting control plane node newest-cni-20210816224431-6986 in cluster newest-cni-20210816224431-6986
	I0816 22:46:01.076721   21514 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 22:46:01.076741   21514 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0816 22:46:01.076752   21514 cache.go:56] Caching tarball of preloaded images
	I0816 22:46:01.076861   21514 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 22:46:01.076876   21514 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on containerd
	I0816 22:46:01.076972   21514 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/config.json ...
	I0816 22:46:01.077116   21514 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:46:01.077136   21514 start.go:313] acquiring machines lock for newest-cni-20210816224431-6986: {Name:mk808edd60d1305a42bb85791729eff4573dbb15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 22:46:01.077179   21514 start.go:317] acquired machines lock for "newest-cni-20210816224431-6986" in 30.987µs
	I0816 22:46:01.077197   21514 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:46:01.077204   21514 fix.go:55] fixHost starting: 
	I0816 22:46:01.077461   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:46:01.077497   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:46:01.087093   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43639
	I0816 22:46:01.087457   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:46:01.087883   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:46:01.087904   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:46:01.088209   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:46:01.088376   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:46:01.088508   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetState
	I0816 22:46:01.091360   21514 fix.go:108] recreateIfNeeded on newest-cni-20210816224431-6986: state=Stopped err=<nil>
	I0816 22:46:01.091403   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	W0816 22:46:01.091557   21514 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:46:01.093564   21514 out.go:177] * Restarting existing kvm2 VM for "newest-cni-20210816224431-6986" ...
	I0816 22:46:01.093589   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Start
	I0816 22:46:01.093756   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Ensuring networks are active...
	I0816 22:46:01.095569   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Ensuring network default is active
	I0816 22:46:01.095849   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Ensuring network mk-newest-cni-20210816224431-6986 is active
	I0816 22:46:01.096218   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Getting domain xml...
	I0816 22:46:01.097869   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Creating domain...
	I0816 22:46:01.545908   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Waiting to get IP...
	I0816 22:46:01.547122   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:01.547603   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has current primary IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:01.547636   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Found IP for machine: 192.168.116.132
	I0816 22:46:01.547653   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Reserving static IP address...
	I0816 22:46:01.548190   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "newest-cni-20210816224431-6986", mac: "52:54:00:50:9a:fc", ip: "192.168.116.132"} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:44:46 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:01.548220   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Reserved static IP address: 192.168.116.132
	I0816 22:46:01.548260   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | skip adding static IP to network mk-newest-cni-20210816224431-6986 - found existing host DHCP lease matching {name: "newest-cni-20210816224431-6986", mac: "52:54:00:50:9a:fc", ip: "192.168.116.132"}
	I0816 22:46:01.548282   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Getting to WaitForSSH function...
	I0816 22:46:01.548306   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Waiting for SSH to be available...
	I0816 22:46:01.553811   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:01.554159   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:44:46 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:01.554194   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:01.554331   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Using SSH client type: external
	I0816 22:46:01.554364   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa (-rw-------)
	I0816 22:46:01.554403   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.116.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:46:01.554424   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | About to run SSH command:
	I0816 22:46:01.554441   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | exit 0
	I0816 22:46:13.711429   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | SSH cmd err, output: <nil>: 
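For reference, the external SSH probe assembled from the logged arguments above is equivalent to the hand-run command below (key path shortened to its .minikube-relative part); libmachine simply retries "exit 0" until the guest answers:

	ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no \
	    -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no \
	    -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -o IdentitiesOnly=yes -i .minikube/machines/newest-cni-20210816224431-6986/id_rsa \
	    -p 22 docker@192.168.116.132 "exit 0"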
	I0816 22:46:13.711794   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetConfigRaw
	I0816 22:46:13.712501   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetIP
	I0816 22:46:13.717700   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:13.718066   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:13.718095   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:13.718293   21514 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/config.json ...
	I0816 22:46:13.718450   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:46:13.718632   21514 machine.go:88] provisioning docker machine ...
	I0816 22:46:13.718660   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:46:13.718876   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetMachineName
	I0816 22:46:13.719024   21514 buildroot.go:166] provisioning hostname "newest-cni-20210816224431-6986"
	I0816 22:46:13.719050   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetMachineName
	I0816 22:46:13.719168   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:46:13.723341   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:13.723617   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:13.723646   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:13.723737   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:46:13.723862   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:46:13.724023   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:46:13.724126   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:46:13.724310   21514 main.go:130] libmachine: Using SSH client type: native
	I0816 22:46:13.724511   21514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.116.132 22 <nil> <nil>}
	I0816 22:46:13.724533   21514 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210816224431-6986 && echo "newest-cni-20210816224431-6986" | sudo tee /etc/hostname
	I0816 22:46:13.885851   21514 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210816224431-6986
	
	I0816 22:46:13.885885   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:46:13.891469   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:13.891848   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:13.891880   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:13.892054   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:46:13.892247   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:46:13.892430   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:46:13.892570   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:46:13.892725   21514 main.go:130] libmachine: Using SSH client type: native
	I0816 22:46:13.892855   21514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.116.132 22 <nil> <nil>}
	I0816 22:46:13.892874   21514 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210816224431-6986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210816224431-6986/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210816224431-6986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:46:14.016803   21514 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:46:14.016831   21514 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:46:14.016869   21514 buildroot.go:174] setting up certificates
	I0816 22:46:14.016878   21514 provision.go:83] configureAuth start
	I0816 22:46:14.016889   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetMachineName
	I0816 22:46:14.017148   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetIP
	I0816 22:46:14.022791   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.023127   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:14.023164   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.023264   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:46:14.027541   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.027828   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:14.027855   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.027932   21514 provision.go:138] copyHostCerts
	I0816 22:46:14.027991   21514 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:46:14.028000   21514 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:46:14.028065   21514 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:46:14.028165   21514 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:46:14.028176   21514 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:46:14.028210   21514 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:46:14.028271   21514 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:46:14.028281   21514 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:46:14.028307   21514 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
	I0816 22:46:14.028356   21514 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210816224431-6986 san=[192.168.116.132 192.168.116.132 localhost 127.0.0.1 minikube newest-cni-20210816224431-6986]
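minikube generates this SAN-bearing server certificate internally in Go; purely as an illustration, a roughly equivalent openssl sketch (file names taken from the paths above; the flags are the usual CSR-then-sign recipe, not minikube's actual code) would be:

	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	    -subj "/O=jenkins.newest-cni-20210816224431-6986" |
	  openssl x509 -req -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	    -extfile <(printf 'subjectAltName=IP:192.168.116.132,DNS:localhost,IP:127.0.0.1,DNS:minikube,DNS:newest-cni-20210816224431-6986') \
	    -out server.pem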
	I0816 22:46:14.209466   21514 provision.go:172] copyRemoteCerts
	I0816 22:46:14.209519   21514 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:46:14.209549   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:46:14.214515   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.214812   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:14.214840   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.215055   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:46:14.215206   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:46:14.215321   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:46:14.215401   21514 sshutil.go:53] new ssh client: &{IP:192.168.116.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa Username:docker}
	I0816 22:46:14.294212   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:46:14.310191   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0816 22:46:14.325847   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 22:46:14.341302   21514 provision.go:86] duration metric: configureAuth took 324.411074ms
	I0816 22:46:14.341324   21514 buildroot.go:189] setting minikube options for container-runtime
	I0816 22:46:14.341509   21514 config.go:177] Loaded profile config "newest-cni-20210816224431-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0816 22:46:14.341520   21514 machine.go:91] provisioned docker machine in 622.872994ms
	I0816 22:46:14.341536   21514 start.go:267] post-start starting for "newest-cni-20210816224431-6986" (driver="kvm2")
	I0816 22:46:14.341548   21514 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:46:14.341577   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:46:14.341895   21514 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:46:14.341934   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:46:14.346801   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.347136   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:14.347165   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.347297   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:46:14.347462   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:46:14.347601   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:46:14.347734   21514 sshutil.go:53] new ssh client: &{IP:192.168.116.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa Username:docker}
	I0816 22:46:14.426622   21514 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:46:14.431162   21514 info.go:137] Remote host: Buildroot 2020.02.12
	I0816 22:46:14.431180   21514 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:46:14.431237   21514 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:46:14.431327   21514 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0816 22:46:14.431435   21514 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:46:14.438129   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:46:14.454171   21514 start.go:270] post-start completed in 112.617965ms
	I0816 22:46:14.454221   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:46:14.454460   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:46:14.459422   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.459760   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:14.459783   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.459922   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:46:14.460089   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:46:14.460310   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:46:14.460450   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:46:14.460606   21514 main.go:130] libmachine: Using SSH client type: native
	I0816 22:46:14.460744   21514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.116.132 22 <nil> <nil>}
	I0816 22:46:14.460755   21514 main.go:130] libmachine: About to run SSH command:
	date +%s.%N
	I0816 22:46:14.567778   21514 main.go:130] libmachine: SSH cmd err, output: <nil>: 1629153974.479290825
	
	I0816 22:46:14.567798   21514 fix.go:212] guest clock: 1629153974.479290825
	I0816 22:46:14.567806   21514 fix.go:225] Guest: 2021-08-16 22:46:14.479290825 +0000 UTC Remote: 2021-08-16 22:46:14.454443208 +0000 UTC m=+13.541804922 (delta=24.847617ms)
	I0816 22:46:14.567855   21514 fix.go:196] guest clock delta is within tolerance: 24.847617ms
	I0816 22:46:14.567864   21514 fix.go:57] fixHost completed within 13.490659677s
	I0816 22:46:14.567870   21514 start.go:80] releasing machines lock for "newest-cni-20210816224431-6986", held for 13.490681925s
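The guest/host clock comparison above can be reproduced by hand; a minimal sketch of what fix.go measures (key path elided, bc assumed present on the host):

	host=$(date +%s.%N)
	guest=$(ssh -i id_rsa docker@192.168.116.132 'date +%s.%N')
	echo "delta: $(echo "$guest - $host" | bc)s"   # 24.847617ms here, well within tolerance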
	I0816 22:46:14.567941   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:46:14.568190   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetIP
	I0816 22:46:14.572979   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.573270   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:14.573308   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.573419   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:46:14.573594   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:46:14.574005   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:46:14.574201   21514 ssh_runner.go:149] Run: systemctl --version
	I0816 22:46:14.574222   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:46:14.574279   21514 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:46:14.574314   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:46:14.580933   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.580958   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.581226   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:14.581260   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.581303   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:14.581321   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.581378   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:46:14.581467   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:46:14.581549   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:46:14.581625   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:46:14.581633   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:46:14.581738   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:46:14.581780   21514 sshutil.go:53] new ssh client: &{IP:192.168.116.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa Username:docker}
	I0816 22:46:14.581843   21514 sshutil.go:53] new ssh client: &{IP:192.168.116.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa Username:docker}
	I0816 22:46:14.660607   21514 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 22:46:14.660761   21514 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:46:18.660575   21514 ssh_runner.go:189] Completed: sudo crictl images --output json: (3.999784803s)
	I0816 22:46:18.660713   21514 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
	I0816 22:46:18.660774   21514 ssh_runner.go:149] Run: which lz4
	I0816 22:46:18.664943   21514 ssh_runner.go:149] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0816 22:46:18.669122   21514 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0816 22:46:18.669148   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (945588089 bytes)
	I0816 22:46:23.666976   21514 containerd.go:546] Took 5.002065 seconds to copy over tarball
	I0816 22:46:23.667039   21514 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 22:46:33.630408   21514 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (9.963347626s)
	I0816 22:46:33.630438   21514 containerd.go:553] Took 9.963432 seconds to extract the tarball
	I0816 22:46:33.630451   21514 ssh_runner.go:100] rm: /preloaded.tar.lz4
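The stat probe showed /preloaded.tar.lz4 absent, so the cached tarball is copied in, unpacked over /var, and removed. A hand-run equivalent of those three steps (run from the .minikube directory, same SSH identity as above) would be:

	scp cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 \
	    docker@192.168.116.132:/preloaded.tar.lz4
	ssh docker@192.168.116.132 \
	    'sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm /preloaded.tar.lz4'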
	I0816 22:46:33.697380   21514 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:46:33.872986   21514 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:46:33.925109   21514 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0816 22:46:34.408772   21514 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0816 22:46:34.422663   21514 docker.go:153] disabling docker service ...
	I0816 22:46:34.422719   21514 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:46:34.433892   21514 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:46:34.443063   21514 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:46:34.560029   21514 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:46:34.692430   21514 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:46:34.703585   21514 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:46:34.717772   21514 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
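The quoted blob is containerd's config.toml, base64-encoded so it survives the shell round-trip intact; to inspect what was written, decode it (CONFIG_B64 standing in for the blob above):

	echo "$CONFIG_B64" | base64 -d | head -n 3
	# root = "/var/lib/containerd"
	# state = "/run/containerd"
	# oom_score = 0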
	I0816 22:46:34.731601   21514 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:46:34.738627   21514 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:46:34.738708   21514 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:46:34.755558   21514 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
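Taken together, the commands above are the standard bridge-netfilter preparation for a bridge CNI network; consolidated into one sketch:

	# /proc/sys/net/bridge/* only exists once br_netfilter is loaded, hence the modprobe fallback
	sudo modprobe br_netfilter
	sudo sysctl net.bridge.bridge-nf-call-iptables    # key is resolvable now
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward   # let the node route pod traffic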
	I0816 22:46:34.762000   21514 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:46:34.882126   21514 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:46:35.437447   21514 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0816 22:46:35.437510   21514 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:46:35.444999   21514 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
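The retry above is minikube's internal backoff; a hand-rolled equivalent of waiting for the socket to reappear after the containerd restart:

	until stat /run/containerd/containerd.sock >/dev/null 2>&1; do
	  sleep 1   # containerd recreates its socket shortly after restart
	done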
	I0816 22:46:36.550448   21514 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:46:36.556120   21514 start.go:413] Will wait 60s for crictl version
	I0816 22:46:36.556169   21514 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:46:36.591005   21514 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0816 22:46:36.591064   21514 ssh_runner.go:149] Run: containerd --version
	I0816 22:46:36.624579   21514 ssh_runner.go:149] Run: containerd --version
	I0816 22:46:36.659274   21514 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on containerd 1.4.9 ...
	I0816 22:46:36.659313   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetIP
	I0816 22:46:36.664585   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:36.664982   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:36.665013   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:36.665215   21514 ssh_runner.go:149] Run: grep 192.168.116.1	host.minikube.internal$ /etc/hosts
	I0816 22:46:36.669325   21514 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.116.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:46:36.681235   21514 out.go:177]   - kubelet.network-plugin=cni
	I0816 22:46:36.682684   21514 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0816 22:46:36.682745   21514 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 22:46:36.682793   21514 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:46:36.716619   21514 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:46:36.716644   21514 containerd.go:517] Images already preloaded, skipping extraction
	I0816 22:46:36.716687   21514 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:46:36.750348   21514 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:46:36.750371   21514 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:46:36.750417   21514 ssh_runner.go:149] Run: sudo crictl info
	I0816 22:46:36.782094   21514 cni.go:93] Creating CNI manager for ""
	I0816 22:46:36.782123   21514 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:46:36.782142   21514 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0816 22:46:36.782161   21514 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.116.132 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210816224431-6986 NodeName:newest-cni-20210816224431-6986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.116.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.116.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:46:36.782392   21514 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.116.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20210816224431-6986"
	  kubeletExtraArgs:
	    node-ip: 192.168.116.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.116.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
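	This rendered config is staged on the guest as /var/tmp/minikube/kubeadm.yaml.new (see the scp below); on a restart like this one, minikube then drives kubeadm against it, conceptually along the lines of (a sketch; the real invocation adds several --ignore-preflight-errors flags):
	
	sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml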
	
	I0816 22:46:36.782530   21514 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210816224431-6986 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.116.132 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816224431-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
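The unit drop-in above lands at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the scp below); once written, the usual systemd steps verify and activate it:

	sudo systemctl daemon-reload   # pick up the new drop-in
	systemctl cat kubelet          # shows kubelet.service plus 10-kubeadm.conf
	sudo systemctl restart kubelet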
	I0816 22:46:36.782657   21514 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0816 22:46:36.789591   21514 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:46:36.789643   21514 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:46:36.796161   21514 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (589 bytes)
	I0816 22:46:36.807488   21514 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0816 22:46:36.818436   21514 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I0816 22:46:36.830115   21514 ssh_runner.go:149] Run: grep 192.168.116.132	control-plane.minikube.internal$ /etc/hosts
	I0816 22:46:36.833656   21514 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.116.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:46:36.842841   21514 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986 for IP: 192.168.116.132
	I0816 22:46:36.842880   21514 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:46:36.842897   21514 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:46:36.842957   21514 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/client.key
	I0816 22:46:36.842975   21514 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/apiserver.key.c979f591
	I0816 22:46:36.842990   21514 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/proxy-client.key
	I0816 22:46:36.843091   21514 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem (1338 bytes)
	W0816 22:46:36.843126   21514 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
	I0816 22:46:36.843135   21514 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 22:46:36.843158   21514 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:46:36.843185   21514 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:46:36.843209   21514 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1675 bytes)
	I0816 22:46:36.843255   21514 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:46:36.844157   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:46:36.860384   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 22:46:36.875586   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:46:36.891375   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 22:46:36.906497   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:46:36.921933   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 22:46:36.937186   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:46:36.952583   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 22:46:36.968121   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
	I0816 22:46:36.983172   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:46:36.998345   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
	I0816 22:46:37.013200   21514 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:46:37.024774   21514 ssh_runner.go:149] Run: openssl version
	I0816 22:46:37.030223   21514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
	I0816 22:46:37.038049   21514 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/69862.pem
	I0816 22:46:37.042253   21514 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:49 /usr/share/ca-certificates/69862.pem
	I0816 22:46:37.042297   21514 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
	I0816 22:46:37.047884   21514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:46:37.055065   21514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:46:37.063538   21514 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:46:37.068408   21514 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:42 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:46:37.068450   21514 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:46:37.074223   21514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:46:37.082919   21514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
	I0816 22:46:37.090259   21514 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6986.pem
	I0816 22:46:37.094833   21514 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:49 /usr/share/ca-certificates/6986.pem
	I0816 22:46:37.094872   21514 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
	I0816 22:46:37.100236   21514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
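
The three `ln -fs` commands above create OpenSSL subject-hash links: a CA dropped into /etc/ssl/certs is only found by OpenSSL's default verifier through a file named <subject-hash>.0, which is why each PEM is first hashed with `openssl x509 -hash -noout`. A minimal Go sketch of the same step (the helper name installCACert is illustrative, not minikube's API):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCACert mirrors the logged sequence: ask openssl for the
    // certificate's subject hash, then link /etc/ssl/certs/<hash>.0 at the
    // installed PEM so OpenSSL's default verifier can discover the CA.
    func installCACert(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pem, err)
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // ln -fs equivalent: drop any stale link first
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
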
	I0816 22:46:37.108106   21514 kubeadm.go:390] StartCluster: {Name:newest-cni-20210816224431-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816224431-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.116.132 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:46:37.108196   21514 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0816 22:46:37.108258   21514 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:46:37.142604   21514 cri.go:76] found id: ""
	I0816 22:46:37.142658   21514 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:46:37.149935   21514 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:46:37.149952   21514 kubeadm.go:600] restartCluster start
	I0816 22:46:37.149986   21514 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:46:37.157345   21514 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:37.157908   21514 kubeconfig.go:117] verify returned: extract IP: "newest-cni-20210816224431-6986" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:46:37.157990   21514 kubeconfig.go:128] "newest-cni-20210816224431-6986" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:46:37.158255   21514 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:46:37.160402   21514 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:46:37.166131   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:37.166167   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:37.174523   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:37.374936   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:37.375017   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:37.385099   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:37.575393   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:37.575481   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:37.584871   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:37.775155   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:37.775234   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:37.784662   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:37.975008   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:37.975070   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:37.984653   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:38.174972   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:38.175057   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:38.184448   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:38.374665   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:38.374747   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:38.384299   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:38.575587   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:38.575663   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:38.585029   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:38.775335   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:38.775415   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:38.785693   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:38.975002   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:38.975078   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:38.984778   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:39.175076   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:39.175149   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:39.184610   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:39.374892   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:39.374971   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:39.383974   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:39.575317   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:39.575390   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:39.584949   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:39.775268   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:39.775339   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:39.784800   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:39.975121   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:39.975206   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:39.984672   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:40.174992   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:40.175059   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:40.184610   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:40.184626   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:40.184661   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:40.192955   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:40.192971   21514 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
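
The run above is a fixed-cadence poll: roughly every 200ms minikube re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*`, and once the deadline passes with pgrep still exiting 1 it concludes the cluster needs reconfiguration. A hedged sketch of that wait loop (the timeout and helper name are assumptions, not minikube's code):

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServerPID polls pgrep until kube-apiserver appears or the
    // context deadline expires, mirroring the ~200ms retry cadence in the log.
    func waitForAPIServerPID(ctx context.Context) (string, error) {
    	tick := time.NewTicker(200 * time.Millisecond)
    	defer tick.Stop()
    	for {
    		out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			return string(out), nil // pgrep exits 0 only when a match exists
    		}
    		select {
    		case <-ctx.Done():
    			return "", fmt.Errorf("apiserver never appeared: %w", ctx.Err())
    		case <-tick.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
    	defer cancel()
    	if pid, err := waitForAPIServerPID(ctx); err != nil {
    		fmt.Println(err)
    	} else {
    		fmt.Print("apiserver pid: ", pid)
    	}
    }
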
	I0816 22:46:40.192979   21514 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:46:40.192993   21514 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:46:40.193033   21514 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:46:40.225257   21514 cri.go:76] found id: ""
	I0816 22:46:40.225327   21514 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:46:40.239043   21514 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:46:40.245591   21514 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:46:40.245637   21514 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:46:40.252666   21514 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:46:40.252681   21514 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:46:40.376975   21514 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:46:41.109564   21514 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:46:41.324369   21514 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:46:41.424440   21514 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
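
Rather than a monolithic `kubeadm init`, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd local) against the staged /var/tmp/minikube/kubeadm.yaml, so an existing node is rebuilt in place. A sketch of that sequencing, assuming kubeadm is on the version-pinned PATH shown in the log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // runInitPhases replays the logged phase sequence: one separate
    // `kubeadm init phase ...` invocation per phase, each against the
    // generated kubeadm.yaml.
    func runInitPhases() error {
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, phase := range phases {
    		args := append([]string{"init", "phase"}, phase...)
    		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("kubeadm", args...)
    		// Prefer the version-pinned binaries, as the logged PATH override does.
    		cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:"+os.Getenv("PATH"))
    		if out, err := cmd.CombinedOutput(); err != nil {
    			return fmt.Errorf("kubeadm init phase %v: %v\n%s", phase, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := runInitPhases(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
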
	I0816 22:46:41.493471   21514 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:46:41.493538   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:42.006807   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:42.506383   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:43.006831   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:43.506171   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:44.006594   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:44.506170   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:45.006751   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:45.507085   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:46.006629   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:46.506151   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:47.006760   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:47.506357   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:48.006799   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:48.507144   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:49.006141   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:49.506735   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:50.007145   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:50.507149   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:51.007177   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:51.507155   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:52.006911   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:52.507067   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:53.007012   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:53.506901   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:54.006982   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:54.507107   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:55.006520   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:55.506203   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:56.006185   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:56.507184   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:57.006266   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:57.506538   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:58.006123   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:58.506228   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:59.006844   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:59.506533   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:47:00.006633   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:47:00.506323   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:47:01.007120   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:47:01.506361   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:47:02.006231   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:47:02.506994   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:47:03.006810   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:47:03.017529   21514 api_server.go:70] duration metric: took 21.524057535s to wait for apiserver process to appear ...
	I0816 22:47:03.017553   21514 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:47:03.017565   21514 api_server.go:239] Checking apiserver healthz at https://192.168.116.132:8443/healthz ...
	I0816 22:47:08.018341   21514 api_server.go:255] stopped: https://192.168.116.132:8443/healthz: Get "https://192.168.116.132:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 22:47:08.519155   21514 api_server.go:239] Checking apiserver healthz at https://192.168.116.132:8443/healthz ...
	I0816 22:47:13.520067   21514 api_server.go:255] stopped: https://192.168.116.132:8443/healthz: Get "https://192.168.116.132:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 22:47:14.018618   21514 api_server.go:239] Checking apiserver healthz at https://192.168.116.132:8443/healthz ...
	I0816 22:47:15.621567   21514 api_server.go:265] https://192.168.116.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:47:15.621601   21514 api_server.go:101] status: https://192.168.116.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:47:16.019288   21514 api_server.go:239] Checking apiserver healthz at https://192.168.116.132:8443/healthz ...
	I0816 22:47:16.033451   21514 api_server.go:265] https://192.168.116.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:47:16.033488   21514 api_server.go:101] status: https://192.168.116.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:47:16.518822   21514 api_server.go:239] Checking apiserver healthz at https://192.168.116.132:8443/healthz ...
	I0816 22:47:16.533617   21514 api_server.go:265] https://192.168.116.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:47:16.533647   21514 api_server.go:101] status: https://192.168.116.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:47:17.019266   21514 api_server.go:239] Checking apiserver healthz at https://192.168.116.132:8443/healthz ...
	I0816 22:47:17.027983   21514 api_server.go:265] https://192.168.116.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:47:17.028007   21514 api_server.go:101] status: https://192.168.116.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:47:17.518677   21514 api_server.go:239] Checking apiserver healthz at https://192.168.116.132:8443/healthz ...
	I0816 22:47:17.525408   21514 api_server.go:265] https://192.168.116.132:8443/healthz returned 200:
	ok
	I0816 22:47:17.532401   21514 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:47:17.532422   21514 api_server.go:129] duration metric: took 14.514863365s to wait for apiserver health ...
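
The 403 -> 500 -> 200 progression above is the normal startup order for /healthz: the anonymous probe is first rejected by RBAC, then served while some poststarthooks are still failing, and finally answers a bare "ok". A minimal probe in the same spirit (TLS verification is skipped here because the apiserver presents a self-signed chain; real callers should pin the cluster CA instead):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // probeHealthz issues the same anonymous GET the log shows and returns
    // the status code plus body, so a caller can distinguish 403/500/200.
    func probeHealthz(url string) (int, string, error) {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return 0, "", err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	return resp.StatusCode, string(body), nil
    }

    func main() {
    	code, body, err := probeHealthz("https://192.168.116.132:8443/healthz")
    	if err != nil {
    		fmt.Println("stopped:", err)
    		return
    	}
    	fmt.Printf("returned %d:\n%s\n", code, body)
    }
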
	I0816 22:47:17.532434   21514 cni.go:93] Creating CNI manager for ""
	I0816 22:47:17.532443   21514 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:47:17.534483   21514 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:47:17.534552   21514 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:47:17.541855   21514 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
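
The 457-byte payload written to /etc/cni/net.d/1-k8s.conflist is a bridge CNI configuration. Its literal contents are not echoed in the log, so every value in the sketch below is illustrative only; it shows the general conflist shape, generated from Go:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // Writes a representative bridge conflist in the shape CNI expects.
    // The subnet, names, and plugin list are assumptions, not minikube's
    // literal file.
    func main() {
    	conflist := map[string]any{
    		"cniVersion": "0.3.1",
    		"name":       "bridge",
    		"plugins": []map[string]any{
    			{
    				"type":      "bridge",
    				"bridge":    "bridge",
    				"isGateway": true,
    				"ipMasq":    true,
    				"ipam": map[string]any{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16",
    				},
    			},
    			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
    		},
    	}
    	data, _ := json.MarshalIndent(conflist, "", "  ")
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
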
	I0816 22:47:17.554903   21514 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:47:17.569288   21514 system_pods.go:59] 9 kube-system pods found
	I0816 22:47:17.569339   21514 system_pods.go:61] "coredns-78fcd69978-749xf" [6208ac25-3942-4cd5-92a0-d06ca299a035] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:47:17.569354   21514 system_pods.go:61] "coredns-78fcd69978-wrcr7" [b15084b1-b422-4c47-88f7-4d530bd3bac6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:47:17.569363   21514 system_pods.go:61] "etcd-newest-cni-20210816224431-6986" [7c7af59d-474f-4e1b-ac73-7be825aca15a] Running
	I0816 22:47:17.569372   21514 system_pods.go:61] "kube-apiserver-newest-cni-20210816224431-6986" [79a8398a-a4c7-4df7-b7f3-19fc26e17bc2] Running
	I0816 22:47:17.569383   21514 system_pods.go:61] "kube-controller-manager-newest-cni-20210816224431-6986" [1c11c512-e7d6-4771-b3f9-61a2bae963f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 22:47:17.569391   21514 system_pods.go:61] "kube-proxy-w6x94" [e29d54cf-57fb-4827-b095-c860cd21c86b] Running
	I0816 22:47:17.569398   21514 system_pods.go:61] "kube-scheduler-newest-cni-20210816224431-6986" [23dd165d-31cf-45d5-82e7-ffdf6488e24f] Running
	I0816 22:47:17.569404   21514 system_pods.go:61] "metrics-server-7c784ccb57-67p7g" [3a4df2ef-ed07-404d-8363-6ce11af1d8db] Pending
	I0816 22:47:17.569412   21514 system_pods.go:61] "storage-provisioner" [33a4ad67-b26f-4f7c-911b-c995de029df2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:47:17.569418   21514 system_pods.go:74] duration metric: took 14.501421ms to wait for pod list to return data ...
	I0816 22:47:17.569427   21514 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:47:17.573647   21514 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:47:17.573678   21514 node_conditions.go:123] node cpu capacity is 2
	I0816 22:47:17.573694   21514 node_conditions.go:105] duration metric: took 4.261615ms to run NodePressure ...
	I0816 22:47:17.573710   21514 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:47:17.955018   21514 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:47:17.965698   21514 ops.go:34] apiserver oom_adj: -16
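
Reading /proc/$(pgrep kube-apiserver)/oom_adj confirms the apiserver is shielded from the kernel OOM killer; -16 is a strongly negative legacy weighting, so other processes are sacrificed first. The same check sketched in Go (the helper name is an assumption):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // apiServerOOMAdj mirrors `cat /proc/$(pgrep kube-apiserver)/oom_adj`:
    // find the newest matching pid, then read its legacy OOM-killer score.
    func apiServerOOMAdj() (string, error) {
    	pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
    	if err != nil {
    		return "", err
    	}
    	data, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(data)), nil
    }

    func main() {
    	adj, err := apiServerOOMAdj()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		return
    	}
    	fmt.Println("apiserver oom_adj:", adj)
    }
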
	I0816 22:47:17.965718   21514 kubeadm.go:604] restartCluster took 40.81576111s
	I0816 22:47:17.965728   21514 kubeadm.go:392] StartCluster complete in 40.857628318s
	I0816 22:47:17.965758   21514 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:47:17.965867   21514 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:47:17.966454   21514 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:47:17.973348   21514 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210816224431-6986" rescaled to 1
	I0816 22:47:17.973401   21514 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.116.132 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0816 22:47:17.973417   21514 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:47:17.975259   21514 out.go:177] * Verifying Kubernetes components...
	I0816 22:47:17.975310   21514 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:47:17.973431   21514 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:47:17.975412   21514 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210816224431-6986"
	I0816 22:47:17.975431   21514 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210816224431-6986"
	W0816 22:47:17.975440   21514 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:47:17.975447   21514 addons.go:59] Setting dashboard=true in profile "newest-cni-20210816224431-6986"
	I0816 22:47:17.975467   21514 host.go:66] Checking if "newest-cni-20210816224431-6986" exists ...
	I0816 22:47:17.975470   21514 addons.go:135] Setting addon dashboard=true in "newest-cni-20210816224431-6986"
	W0816 22:47:17.975477   21514 addons.go:147] addon dashboard should already be in state true
	I0816 22:47:17.975477   21514 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210816224431-6986"
	I0816 22:47:17.975489   21514 addons.go:59] Setting metrics-server=true in profile "newest-cni-20210816224431-6986"
	I0816 22:47:17.975516   21514 addons.go:135] Setting addon metrics-server=true in "newest-cni-20210816224431-6986"
	W0816 22:47:17.975539   21514 addons.go:147] addon metrics-server should already be in state true
	I0816 22:47:17.975570   21514 host.go:66] Checking if "newest-cni-20210816224431-6986" exists ...
	I0816 22:47:17.975495   21514 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210816224431-6986"
	I0816 22:47:17.975948   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:47:17.975984   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:47:17.975496   21514 host.go:66] Checking if "newest-cni-20210816224431-6986" exists ...
	I0816 22:47:17.976064   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:47:17.976093   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:47:17.976249   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:47:17.976274   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:47:17.976449   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:47:17.973649   21514 config.go:177] Loaded profile config "newest-cni-20210816224431-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0816 22:47:17.976531   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:47:17.987685   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38235
	I0816 22:47:17.988132   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:47:17.988651   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:47:17.988673   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:47:17.989025   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:47:17.989564   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:47:17.989602   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:47:17.995295   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46003
	I0816 22:47:17.995521   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44273
	I0816 22:47:17.995761   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:47:17.995867   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:47:17.995901   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33445
	I0816 22:47:17.996237   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:47:17.996256   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:47:17.996292   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:47:17.996371   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:47:17.996387   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:47:17.996762   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:47:17.996785   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:47:17.996809   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:47:17.996953   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:47:17.997103   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:47:17.997121   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetState
	I0816 22:47:17.997436   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:47:17.997472   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:47:17.997636   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:47:17.997676   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:47:18.009519   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43111
	I0816 22:47:18.009973   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:47:18.010466   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:47:18.010489   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:47:18.010600   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45397
	I0816 22:47:18.010852   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:47:18.010968   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:47:18.011155   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetState
	I0816 22:47:18.011525   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:47:18.011550   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:47:18.011909   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:47:18.012071   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetState
	I0816 22:47:18.012254   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33005
	I0816 22:47:18.012617   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:47:18.013055   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:47:18.013080   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:47:18.013616   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:47:18.013799   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetState
	I0816 22:47:18.014497   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:47:18.017145   21514 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:47:18.017198   21514 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:47:18.017211   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:47:18.015647   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:47:18.017230   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:47:18.015984   21514 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210816224431-6986"
	W0816 22:47:18.017291   21514 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:47:18.017324   21514 host.go:66] Checking if "newest-cni-20210816224431-6986" exists ...
	I0816 22:47:18.016995   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:47:18.019081   21514 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:47:18.020492   21514 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:47:18.017738   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:47:18.020558   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:47:18.019187   21514 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:47:18.021940   21514 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:47:18.021942   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:47:18.021977   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:47:18.021992   21514 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:47:18.022002   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:47:18.022020   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:47:18.022830   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:47:18.023389   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:47:18.023420   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:47:18.023567   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:47:18.023730   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:47:18.023875   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:47:18.024012   21514 sshutil.go:53] new ssh client: &{IP:192.168.116.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa Username:docker}
	I0816 22:47:18.029078   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:47:18.029458   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:47:18.029486   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:47:18.029669   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:47:18.029822   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:47:18.029842   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:47:18.030003   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:47:18.030151   21514 sshutil.go:53] new ssh client: &{IP:192.168.116.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa Username:docker}
	I0816 22:47:18.030183   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:47:18.030222   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:47:18.030364   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:47:18.030525   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:47:18.030674   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:47:18.030808   21514 sshutil.go:53] new ssh client: &{IP:192.168.116.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa Username:docker}
	I0816 22:47:18.033388   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41481
	I0816 22:47:18.033762   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:47:18.034165   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:47:18.034185   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:47:18.034539   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:47:18.035111   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:47:18.035158   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:47:18.046605   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35131
	I0816 22:47:18.047012   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:47:18.047466   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:47:18.047491   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:47:18.047805   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:47:18.047975   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetState
	I0816 22:47:18.051135   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:47:18.051335   21514 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:47:18.051349   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:47:18.051367   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:47:18.056714   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:47:18.057123   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:47:18.057147   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:47:18.057265   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:47:18.057415   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:47:18.057549   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:47:18.057683   21514 sshutil.go:53] new ssh client: &{IP:192.168.116.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa Username:docker}
	I0816 22:47:18.142562   21514 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:47:18.142590   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:47:18.162118   21514 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:47:18.162138   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:47:18.193952   21514 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:47:18.193976   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:47:18.220623   21514 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:47:18.278582   21514 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:47:18.281519   21514 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:47:18.281537   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:47:18.288547   21514 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:47:18.288568   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:47:18.288774   21514 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:47:18.288825   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:47:18.290814   21514 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0816 22:47:18.298864   21514 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:47:18.298884   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:47:18.328916   21514 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:47:18.335491   21514 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:47:18.335511   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:47:18.401871   21514 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:47:18.401895   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:47:18.478224   21514 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:47:18.478261   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:47:18.573217   21514 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:47:18.573256   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:47:18.607532   21514 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:47:18.607561   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:47:18.648234   21514 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:47:18.648264   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:47:18.677095   21514 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:47:18.695961   21514 main.go:130] libmachine: Making call to close driver server
	I0816 22:47:18.695994   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Close
	I0816 22:47:18.696259   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Closing plugin on server side
	I0816 22:47:18.696264   21514 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:47:18.696282   21514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:47:18.696292   21514 main.go:130] libmachine: Making call to close driver server
	I0816 22:47:18.696302   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Close
	I0816 22:47:18.696523   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Closing plugin on server side
	I0816 22:47:18.696546   21514 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:47:18.696578   21514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:47:18.696599   21514 main.go:130] libmachine: Making call to close driver server
	I0816 22:47:18.696611   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Close
	I0816 22:47:18.696845   21514 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:47:18.696860   21514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:47:18.966627   21514 api_server.go:70] duration metric: took 993.191413ms to wait for apiserver process to appear ...
	I0816 22:47:18.966655   21514 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:47:18.966664   21514 api_server.go:239] Checking apiserver healthz at https://192.168.116.132:8443/healthz ...
	I0816 22:47:18.966631   21514 main.go:130] libmachine: Making call to close driver server
	I0816 22:47:18.966725   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Close
	I0816 22:47:18.967012   21514 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:47:18.967047   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Closing plugin on server side
	I0816 22:47:18.967057   21514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:47:18.967076   21514 main.go:130] libmachine: Making call to close driver server
	I0816 22:47:18.967090   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Close
	I0816 22:47:18.967309   21514 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:47:18.967328   21514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:47:18.974225   21514 api_server.go:265] https://192.168.116.132:8443/healthz returned 200:
	ok
	I0816 22:47:18.976413   21514 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:47:18.976436   21514 api_server.go:129] duration metric: took 9.774611ms to wait for apiserver health ...
	I0816 22:47:18.976447   21514 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:47:18.983223   21514 system_pods.go:59] 9 kube-system pods found
	I0816 22:47:18.983246   21514 system_pods.go:61] "coredns-78fcd69978-749xf" [6208ac25-3942-4cd5-92a0-d06ca299a035] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:47:18.983254   21514 system_pods.go:61] "coredns-78fcd69978-wrcr7" [b15084b1-b422-4c47-88f7-4d530bd3bac6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:47:18.983261   21514 system_pods.go:61] "etcd-newest-cni-20210816224431-6986" [7c7af59d-474f-4e1b-ac73-7be825aca15a] Running
	I0816 22:47:18.983279   21514 system_pods.go:61] "kube-apiserver-newest-cni-20210816224431-6986" [79a8398a-a4c7-4df7-b7f3-19fc26e17bc2] Running
	I0816 22:47:18.983286   21514 system_pods.go:61] "kube-controller-manager-newest-cni-20210816224431-6986" [1c11c512-e7d6-4771-b3f9-61a2bae963f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 22:47:18.983295   21514 system_pods.go:61] "kube-proxy-w6x94" [e29d54cf-57fb-4827-b095-c860cd21c86b] Running
	I0816 22:47:18.983302   21514 system_pods.go:61] "kube-scheduler-newest-cni-20210816224431-6986" [23dd165d-31cf-45d5-82e7-ffdf6488e24f] Running
	I0816 22:47:18.983306   21514 system_pods.go:61] "metrics-server-7c784ccb57-67p7g" [3a4df2ef-ed07-404d-8363-6ce11af1d8db] Pending
	I0816 22:47:18.983314   21514 system_pods.go:61] "storage-provisioner" [33a4ad67-b26f-4f7c-911b-c995de029df2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:47:18.983323   21514 system_pods.go:74] duration metric: took 6.870298ms to wait for pod list to return data ...
	I0816 22:47:18.983333   21514 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:47:18.995359   21514 default_sa.go:45] found service account: "default"
	I0816 22:47:18.995376   21514 default_sa.go:55] duration metric: took 12.037971ms for default service account to be created ...
	I0816 22:47:18.995384   21514 kubeadm.go:547] duration metric: took 1.02195268s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0816 22:47:18.995404   21514 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:47:19.001993   21514 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:47:19.002015   21514 node_conditions.go:123] node cpu capacity is 2
	I0816 22:47:19.002026   21514 node_conditions.go:105] duration metric: took 6.618379ms to run NodePressure ...
	I0816 22:47:19.002035   21514 start.go:231] waiting for startup goroutines ...
	I0816 22:47:19.009785   21514 main.go:130] libmachine: Making call to close driver server
	I0816 22:47:19.009820   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Close
	I0816 22:47:19.010068   21514 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:47:19.010113   21514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:47:19.010131   21514 main.go:130] libmachine: Making call to close driver server
	I0816 22:47:19.010147   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Close
	I0816 22:47:19.010415   21514 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:47:19.010434   21514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:47:19.010445   21514 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210816224431-6986"
	I0816 22:47:19.010423   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Closing plugin on server side
	I0816 22:47:19.343310   21514 main.go:130] libmachine: Making call to close driver server
	I0816 22:47:19.343342   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Close
	I0816 22:47:19.343593   21514 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:47:19.343611   21514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:47:19.343622   21514 main.go:130] libmachine: Making call to close driver server
	I0816 22:47:19.343633   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Close
	I0816 22:47:19.343645   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Closing plugin on server side
	I0816 22:47:19.343872   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Closing plugin on server side
	I0816 22:47:19.343903   21514 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:47:19.343915   21514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:47:19.345775   21514 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0816 22:47:19.345798   21514 addons.go:344] enableAddons completed in 1.372371026s
	I0816 22:47:19.389898   21514 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
	I0816 22:47:19.391727   21514 out.go:177] 
	W0816 22:47:19.391864   21514 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0816 22:47:19.393267   21514 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0816 22:47:19.394806   21514 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210816224431-6986" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	5a21dc88c9d42       cf9cba6c3e4a8       6 seconds ago       Running             kube-controller-manager   2                   ca0f8c8c87908
	02ee26a5c84a1       0048118155842       13 seconds ago      Running             etcd                      1                   3c541e17d1ed2
	148f9e5acb809       b2462aa94d403       20 seconds ago      Running             kube-apiserver            1                   8e4331d6b4771
	22afe5aae0f35       7da2efaa5b480       21 seconds ago      Running             kube-scheduler            1                   29ad7d9cab126
	7b31f609984fc       cf9cba6c3e4a8       23 seconds ago      Exited              kube-controller-manager   1                   ca0f8c8c87908
	
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2021-08-16 22:46:11 UTC, end at Mon 2021-08-16 22:47:22 UTC. --
	Aug 16 22:47:02 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:02.305636545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-newest-cni-20210816224431-6986,Uid:7e2f77f67b13e20a69cee33bb38b10ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5\""
	Aug 16 22:47:02 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:02.311758117Z" level=info msg="CreateContainer within sandbox \"8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:1,}"
	Aug 16 22:47:02 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:02.352003599Z" level=info msg="CreateContainer within sandbox \"8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:1,} returns container id \"148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019\""
	Aug 16 22:47:02 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:02.352991320Z" level=info msg="StartContainer for \"148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019\""
	Aug 16 22:47:02 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:02.718715209Z" level=info msg="StartContainer for \"148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019\" returns successfully"
	Aug 16 22:47:02 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:02.949062330Z" level=info msg="PullImage \"k8s.gcr.io/etcd:3.5.0-0\""
	Aug 16 22:47:08 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:08.913078583Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/etcd:3.5.0-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 16 22:47:08 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:08.918063001Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 16 22:47:08 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:08.922031349Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/etcd:3.5.0-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 16 22:47:08 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:08.925798579Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 16 22:47:08 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:08.926163295Z" level=info msg="PullImage \"k8s.gcr.io/etcd:3.5.0-0\" returns image reference \"sha256:0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba\""
	Aug 16 22:47:08 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:08.930655779Z" level=info msg="CreateContainer within sandbox \"3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0\" for container &ContainerMetadata{Name:etcd,Attempt:1,}"
	Aug 16 22:47:09 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:09.245071482Z" level=info msg="CreateContainer within sandbox \"3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0\" for &ContainerMetadata{Name:etcd,Attempt:1,} returns container id \"02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9\""
	Aug 16 22:47:09 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:09.246636133Z" level=info msg="StartContainer for \"02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9\""
	Aug 16 22:47:09 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:09.604805365Z" level=info msg="StartContainer for \"02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9\" returns successfully"
	Aug 16 22:47:15 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:15.680070890Z" level=info msg="Finish piping stderr of container \"7b31f609984fc54447e9df1259fba6a55768fcd53cfb0dc5bcfd7400326bdf1c\""
	Aug 16 22:47:15 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:15.683084322Z" level=info msg="TaskExit event &TaskExit{ContainerID:7b31f609984fc54447e9df1259fba6a55768fcd53cfb0dc5bcfd7400326bdf1c,ID:7b31f609984fc54447e9df1259fba6a55768fcd53cfb0dc5bcfd7400326bdf1c,Pid:2564,ExitStatus:255,ExitedAt:2021-08-16 22:47:15.682799946 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:47:15 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:15.683165531Z" level=info msg="Finish piping stdout of container \"7b31f609984fc54447e9df1259fba6a55768fcd53cfb0dc5bcfd7400326bdf1c\""
	Aug 16 22:47:15 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:15.791905476Z" level=info msg="shim disconnected" id=7b31f609984fc54447e9df1259fba6a55768fcd53cfb0dc5bcfd7400326bdf1c
	Aug 16 22:47:15 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:15.792052451Z" level=error msg="copy shim log" error="read /proc/self/fd/28: file already closed"
	Aug 16 22:47:15 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:15.812648180Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Aug 16 22:47:16 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:16.035371732Z" level=info msg="CreateContainer within sandbox \"ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}"
	Aug 16 22:47:16 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:16.084897532Z" level=info msg="CreateContainer within sandbox \"ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217\""
	Aug 16 22:47:16 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:16.085865713Z" level=info msg="StartContainer for \"5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217\""
	Aug 16 22:47:16 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:16.699380721Z" level=info msg="StartContainer for \"5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Aug16 22:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.097101] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.794089] Unstable clock detected, switching default tracing clock to "global"
	              If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000018] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.426870] systemd-fstab-generator[1163]: Ignoring "noauto" for root device
	[  +0.033219] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.885094] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1734 comm=systemd-network
	[  +0.685985] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.430130] vboxguest: loading out-of-tree module taints kernel.
	[  +0.009638] vboxguest: PCI device not found, probably running on physical hardware.
	[ +20.659327] systemd-fstab-generator[2067]: Ignoring "noauto" for root device
	[  +0.697594] systemd-fstab-generator[2100]: Ignoring "noauto" for root device
	[  +0.120017] systemd-fstab-generator[2113]: Ignoring "noauto" for root device
	[  +0.200847] systemd-fstab-generator[2143]: Ignoring "noauto" for root device
	[  +6.430488] systemd-fstab-generator[2334]: Ignoring "noauto" for root device
	[Aug16 22:47] systemd-fstab-generator[2983]: Ignoring "noauto" for root device
	[  +0.682253] systemd-fstab-generator[3037]: Ignoring "noauto" for root device
	[  +0.890755] systemd-fstab-generator[3091]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9] <==
	* {"level":"info","ts":"2021-08-16T22:47:09.668Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2021-08-16T22:47:09.671Z","caller":"etcdserver/server.go:834","msg":"starting etcd server","local-member-id":"b83c2c8e708b08c3","local-server-version":"3.5.0","cluster-id":"4b377c4094c7e6d4","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-16T22:47:09.671Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"b83c2c8e708b08c3","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2021-08-16T22:47:09.672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b83c2c8e708b08c3 switched to configuration voters=(13275534791866517699)"}
	{"level":"info","ts":"2021-08-16T22:47:09.673Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"4b377c4094c7e6d4","local-member-id":"b83c2c8e708b08c3","added-peer-id":"b83c2c8e708b08c3","added-peer-peer-urls":["https://192.168.116.132:2380"]}
	{"level":"info","ts":"2021-08-16T22:47:09.673Z","caller":"membership/cluster.go:523","msg":"updated cluster version","cluster-id":"4b377c4094c7e6d4","local-member-id":"b83c2c8e708b08c3","from":"3.5","to":"3.5"}
	{"level":"info","ts":"2021-08-16T22:47:09.681Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-08-16T22:47:09.681Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b83c2c8e708b08c3","initial-advertise-peer-urls":["https://192.168.116.132:2380"],"listen-peer-urls":["https://192.168.116.132:2380"],"advertise-client-urls":["https://192.168.116.132:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.116.132:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-08-16T22:47:09.681Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-08-16T22:47:09.682Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.116.132:2380"}
	{"level":"info","ts":"2021-08-16T22:47:09.682Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.116.132:2380"}
	{"level":"info","ts":"2021-08-16T22:47:09.760Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b83c2c8e708b08c3 is starting a new election at term 2"}
	{"level":"info","ts":"2021-08-16T22:47:09.760Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b83c2c8e708b08c3 became pre-candidate at term 2"}
	{"level":"info","ts":"2021-08-16T22:47:09.761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b83c2c8e708b08c3 received MsgPreVoteResp from b83c2c8e708b08c3 at term 2"}
	{"level":"info","ts":"2021-08-16T22:47:09.761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b83c2c8e708b08c3 became candidate at term 3"}
	{"level":"info","ts":"2021-08-16T22:47:09.761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b83c2c8e708b08c3 received MsgVoteResp from b83c2c8e708b08c3 at term 3"}
	{"level":"info","ts":"2021-08-16T22:47:09.761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b83c2c8e708b08c3 became leader at term 3"}
	{"level":"info","ts":"2021-08-16T22:47:09.761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b83c2c8e708b08c3 elected leader b83c2c8e708b08c3 at term 3"}
	{"level":"info","ts":"2021-08-16T22:47:09.761Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b83c2c8e708b08c3","local-member-attributes":"{Name:newest-cni-20210816224431-6986 ClientURLs:[https://192.168.116.132:2379]}","request-path":"/0/members/b83c2c8e708b08c3/attributes","cluster-id":"4b377c4094c7e6d4","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-16T22:47:09.762Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-16T22:47:09.763Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-16T22:47:09.764Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-08-16T22:47:09.765Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-16T22:47:09.765Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-16T22:47:09.766Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.116.132:2379"}
	
	* 
	* ==> kernel <==
	*  22:48:02 up 1 min,  0 users,  load average: 0.97, 0.40, 0.15
	Linux newest-cni-20210816224431-6986 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019] <==
	* I0816 22:47:15.648945       1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0816 22:47:15.649383       1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0816 22:47:15.650190       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0816 22:47:15.650537       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	I0816 22:47:15.650712       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0816 22:47:15.708581       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0816 22:47:15.796624       1 apf_controller.go:304] Running API Priority and Fairness config worker
	I0816 22:47:15.797178       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0816 22:47:15.798657       1 cache.go:39] Caches are synced for autoregister controller
	I0816 22:47:15.800124       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0816 22:47:15.801659       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 22:47:15.810550       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 22:47:16.545183       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0816 22:47:16.677027       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0816 22:47:16.677145       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	W0816 22:47:17.677600       1 handler_proxy.go:104] no RequestInfo found in the context
	E0816 22:47:17.677766       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0816 22:47:17.677790       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 22:47:17.793677       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0816 22:47:17.820427       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0816 22:47:17.897189       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0816 22:47:17.921165       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 22:47:17.937108       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 22:47:19.083662       1 controller.go:611] quota admission added evaluator for: namespaces
	
	* 
	* ==> kube-controller-manager [5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217] <==
	* I0816 22:47:19.412566       1 controllermanager.go:577] Started "endpointslice"
	I0816 22:47:19.413191       1 endpointslice_controller.go:257] Starting endpoint slice controller
	I0816 22:47:19.413416       1 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
	I0816 22:47:19.422003       1 controllermanager.go:577] Started "pvc-protection"
	I0816 22:47:19.422818       1 pvc_protection_controller.go:110] "Starting PVC protection controller"
	I0816 22:47:19.424679       1 shared_informer.go:240] Waiting for caches to sync for PVC protection
	I0816 22:47:19.437520       1 controllermanager.go:577] Started "root-ca-cert-publisher"
	I0816 22:47:19.437628       1 publisher.go:107] Starting root CA certificate configmap publisher
	I0816 22:47:19.440012       1 shared_informer.go:240] Waiting for caches to sync for crt configmap
	I0816 22:47:19.445576       1 controllermanager.go:577] Started "podgc"
	I0816 22:47:19.447655       1 gc_controller.go:89] Starting GC controller
	I0816 22:47:19.447827       1 shared_informer.go:240] Waiting for caches to sync for GC
	I0816 22:47:19.451635       1 controllermanager.go:577] Started "bootstrapsigner"
	I0816 22:47:19.451851       1 shared_informer.go:240] Waiting for caches to sync for bootstrap_signer
	I0816 22:47:19.458074       1 controllermanager.go:577] Started "tokencleaner"
	I0816 22:47:19.458325       1 tokencleaner.go:118] Starting token cleaner controller
	I0816 22:47:19.458426       1 shared_informer.go:240] Waiting for caches to sync for token_cleaner
	I0816 22:47:19.458506       1 shared_informer.go:247] Caches are synced for token_cleaner 
	I0816 22:47:19.464661       1 replica_set.go:186] Starting replicaset controller
	I0816 22:47:19.464772       1 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
	I0816 22:47:19.464573       1 controllermanager.go:577] Started "replicaset"
	I0816 22:47:19.469984       1 controllermanager.go:577] Started "statefulset"
	I0816 22:47:19.470051       1 stateful_set.go:148] Starting stateful set controller
	I0816 22:47:19.470609       1 shared_informer.go:240] Waiting for caches to sync for stateful set
	I0816 22:47:19.473848       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-controller-manager [7b31f609984fc54447e9df1259fba6a55768fcd53cfb0dc5bcfd7400326bdf1c] <==
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:159 +0x2b1
	
	goroutine 152 [select]:
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext(0x51d54d0, 0xc0006fcec0, 0xc000a80630, 0xc00067ea40, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:655 +0x109
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x51d54d0, 0xc0006fcec0, 0xc000a80601, 0xc000a80630, 0xc00067ea40, 0x436b220, 0xc00067ea40)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:591 +0xa5
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext(0x51d54d0, 0xc0006fcec0, 0xdf8475800, 0xc00067ea40, 0xc0000c4660, 0x4c63ab8)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:542 +0x65
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xdf8475800, 0xc00067ea20, 0xc000116360, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:533 +0xa5
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:162 +0x328
	
	goroutine 153 [select]:
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1(0xc000116360, 0xc00067ea30, 0x51d54d0, 0xc0006fcec0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:298 +0x87
	created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:297 +0x8c
	
	goroutine 154 [select]:
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1.1(0xc0000c4d20, 0xdf8475800, 0x0, 0x51d54d0, 0xc0006fcf00)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:705 +0x156
	created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:688 +0x96
	
	* 
	* ==> kube-scheduler [22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a] <==
	* I0816 22:47:14.196190       1 trace.go:205] Trace[1250961944]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:47:04.186) (total time: 10009ms):
	Trace[1250961944]: [10.009604348s] [10.009604348s] END
	E0816 22:47:14.196906       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.116.132:8443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	I0816 22:47:14.225698       1 trace.go:205] Trace[1888648226]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:47:04.223) (total time: 10001ms):
	Trace[1888648226]: [10.00178459s] [10.00178459s] END
	E0816 22:47:14.226144       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.116.132:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	I0816 22:47:14.232661       1 trace.go:205] Trace[299216213]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:47:04.230) (total time: 10002ms):
	Trace[299216213]: [10.002064048s] [10.002064048s] END
	E0816 22:47:14.233026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.116.132:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	I0816 22:47:14.242115       1 trace.go:205] Trace[1298987771]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:47:04.241) (total time: 10000ms):
	Trace[1298987771]: [10.000996775s] [10.000996775s] END
	E0816 22:47:14.242652       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.116.132:8443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0816 22:47:15.542724       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:47:15.551260       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:47:15.588669       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0816 22:47:16.004760       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0816 22:47:16.275306       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0816 22:47:16.276490       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0816 22:47:16.276942       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0816 22:47:16.277332       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0816 22:47:16.277957       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0816 22:47:16.278418       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0816 22:47:16.280622       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0816 22:47:16.281040       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0816 22:47:16.281599       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:46:11 UTC, end at Mon 2021-08-16 22:48:03 UTC. --
	Aug 16 22:47:17 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:17.780135    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:17 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:17.880859    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:17 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:17.981375    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:18 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:18.081941    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:18 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:18.190922    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:18 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:18.292099    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:18 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:18.392347    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:18 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:18.507737    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:18 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:18.608137    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:18 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:18.709364    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:18 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:18.810553    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:18 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:18.910773    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:19 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:19.011642    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:19 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:19.112917    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:19 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:19.215985    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:19 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:19.334195    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:19 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:19.435413    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:19 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:19.536558    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:19 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:19.637798    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:19 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:19.738584    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:19 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:19.839822    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:20 newest-cni-20210816224431-6986 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:47:20 newest-cni-20210816224431-6986 kubelet[2342]: I0816 22:47:20.216266    2342 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 16 22:47:20 newest-cni-20210816224431-6986 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:47:20 newest-cni-20210816224431-6986 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	

-- /stdout --
** stderr ** 
	E0816 22:48:02.851751   22076 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:240: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210816224431-6986 -n newest-cni-20210816224431-6986
helpers_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210816224431-6986 -n newest-cni-20210816224431-6986: exit status 2 (248.011329ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:240: status error: exit status 2 (may be ok)
helpers_test.go:245: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:246: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-20210816224431-6986 logs -n 25
E0816 22:48:13.448143    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/flannel-20210816222225-6986/client.crt: no such file or directory
E0816 22:48:20.861483    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816222225-6986/client.crt: no such file or directory
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 -p newest-cni-20210816224431-6986 logs -n 25: exit status 110 (40.938501281s)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   | Version |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
	| addons  | enable dashboard -p                                        | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:36:55 UTC |
	|         | embed-certs-20210816223333-6986                            |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| stop    | -p                                                         | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:35:52 UTC | Mon, 16 Aug 2021 22:37:25 UTC |
	|         | default-k8s-different-port-20210816223418-6986             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:37:25 UTC | Mon, 16 Aug 2021 22:37:25 UTC |
	|         | default-k8s-different-port-20210816223418-6986             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| start   | -p no-preload-20210816223156-6986                          | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:42:31 UTC |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --preload=false                                |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                              |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:42:42 UTC | Mon, 16 Aug 2021 22:42:42 UTC |
	|         | no-preload-20210816223156-6986                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:18 UTC | Mon, 16 Aug 2021 22:43:37 UTC |
	|         | old-k8s-version-20210816223154-6986                        |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                |         |         |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                |         |         |                               |                               |
	|         | --disable-driver-mounts                                    |                                                |         |         |                               |                               |
	|         | --keep-context=false --driver=kvm2                         |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                |         |         |                               |                               |
	| start   | -p                                                         | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:36:55 UTC | Mon, 16 Aug 2021 22:43:44 UTC |
	|         | embed-certs-20210816223333-6986                            |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |         |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                |         |         |                               |                               |
	|         | --driver=kvm2                                              |                                                |         |         |                               |                               |
	|         | --container-runtime=containerd                             |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:43:47 UTC | Mon, 16 Aug 2021 22:43:47 UTC |
	|         | old-k8s-version-20210816223154-6986                        |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| start   | -p                                                         | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:37:25 UTC | Mon, 16 Aug 2021 22:43:52 UTC |
	|         | default-k8s-different-port-20210816223418-6986             |                                                |         |         |                               |                               |
	|         | --memory=2200 --alsologtostderr --wait=true                |                                                |         |         |                               |                               |
	|         | --apiserver-port=8444 --driver=kvm2                        |                                                |         |         |                               |                               |
	|         |  --container-runtime=containerd                            |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.21.3                               |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:43:55 UTC | Mon, 16 Aug 2021 22:43:55 UTC |
	|         | embed-certs-20210816223333-6986                            |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:44:03 UTC | Mon, 16 Aug 2021 22:44:03 UTC |
	|         | default-k8s-different-port-20210816223418-6986             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:44:30 UTC | Mon, 16 Aug 2021 22:44:30 UTC |
	|         | no-preload-20210816223156-6986                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | no-preload-20210816223156-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:44:31 UTC | Mon, 16 Aug 2021 22:44:31 UTC |
	|         | no-preload-20210816223156-6986                             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:45:42 UTC | Mon, 16 Aug 2021 22:45:43 UTC |
	|         | embed-certs-20210816223333-6986                            |                                                |         |         |                               |                               |
	| delete  | -p                                                         | embed-certs-20210816223333-6986                | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:45:43 UTC | Mon, 16 Aug 2021 22:45:43 UTC |
	|         | embed-certs-20210816223333-6986                            |                                                |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:45:50 UTC | Mon, 16 Aug 2021 22:45:51 UTC |
	|         | default-k8s-different-port-20210816223418-6986             |                                                |         |         |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210816223418-6986 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:45:51 UTC | Mon, 16 Aug 2021 22:45:51 UTC |
	|         | default-k8s-different-port-20210816223418-6986             |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210816224431-6986 --memory=2200            | newest-cni-20210816224431-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:44:31 UTC | Mon, 16 Aug 2021 22:45:57 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=containerd              |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| addons  | enable metrics-server -p                                   | newest-cni-20210816224431-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:45:57 UTC | Mon, 16 Aug 2021 22:45:58 UTC |
	|         | newest-cni-20210816224431-6986                             |                                                |         |         |                               |                               |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                                                |         |         |                               |                               |
	|         | --registries=MetricsServer=fake.domain                     |                                                |         |         |                               |                               |
	| stop    | -p                                                         | newest-cni-20210816224431-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:45:58 UTC | Mon, 16 Aug 2021 22:46:00 UTC |
	|         | newest-cni-20210816224431-6986                             |                                                |         |         |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |         |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210816224431-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:46:00 UTC | Mon, 16 Aug 2021 22:46:00 UTC |
	|         | newest-cni-20210816224431-6986                             |                                                |         |         |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:46:35 UTC | Mon, 16 Aug 2021 22:46:36 UTC |
	|         | old-k8s-version-20210816223154-6986                        |                                                |         |         |                               |                               |
	| delete  | -p                                                         | old-k8s-version-20210816223154-6986            | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:46:36 UTC | Mon, 16 Aug 2021 22:46:36 UTC |
	|         | old-k8s-version-20210816223154-6986                        |                                                |         |         |                               |                               |
	| start   | -p newest-cni-20210816224431-6986 --memory=2200            | newest-cni-20210816224431-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:46:00 UTC | Mon, 16 Aug 2021 22:47:19 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |         |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |         |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |         |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |         |                               |                               |
	|         | --driver=kvm2  --container-runtime=containerd              |                                                |         |         |                               |                               |
	|         | --kubernetes-version=v1.22.0-rc.0                          |                                                |         |         |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210816224431-6986                 | jenkins | v1.22.0 | Mon, 16 Aug 2021 22:47:19 UTC | Mon, 16 Aug 2021 22:47:19 UTC |
	|         | newest-cni-20210816224431-6986                             |                                                |         |         |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |         |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|---------|-------------------------------|-------------------------------|
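
The repeated "ssh -p <profile> sudo crictl images -o json" rows in the audit table above are the image-verification step of each test: the suite dumps the VM's image store as JSON and checks it for the expected tags. For poking at such a dump offline, a minimal Go sketch along these lines works; the "images"/"repoTags" field names follow crictl's JSON serialization of the CRI image list, and images.json is a placeholder filename, not a file produced by this run:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// imageList mirrors the top-level shape of `crictl images -o json`;
	// only the fields used below are declared.
	type imageList struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		data, err := os.ReadFile("images.json") // placeholder dump file
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(data, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.ID, img.RepoTags)
		}
	}
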
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 22:46:00
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 22:46:00.956222   21514 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:46:00.956287   21514 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:46:00.956291   21514 out.go:311] Setting ErrFile to fd 2...
	I0816 22:46:00.956294   21514 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:46:00.956394   21514 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:46:00.956595   21514 out.go:305] Setting JSON to false
	I0816 22:46:00.992762   21514 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":5323,"bootTime":1629148638,"procs":166,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:46:00.992863   21514 start.go:121] virtualization: kvm guest
	I0816 22:46:00.995858   21514 out.go:177] * [newest-cni-20210816224431-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:46:00.997413   21514 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:46:00.996082   21514 notify.go:169] Checking for updates...
	I0816 22:46:00.998918   21514 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:46:01.000333   21514 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:46:01.001708   21514 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:46:01.002109   21514 config.go:177] Loaded profile config "newest-cni-20210816224431-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0816 22:46:01.002475   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:46:01.002538   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:46:01.013174   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46193
	I0816 22:46:01.013609   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:46:01.014137   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:46:01.014162   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:46:01.014568   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:46:01.014744   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:46:01.014913   21514 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:46:01.015252   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:46:01.015289   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:46:01.029317   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:42513
	I0816 22:46:01.029739   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:46:01.030234   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:46:01.030256   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:46:01.030617   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:46:01.030779   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:46:01.061460   21514 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 22:46:01.061485   21514 start.go:278] selected driver: kvm2
	I0816 22:46:01.061518   21514 start.go:751] validating driver "kvm2" against &{Name:newest-cni-20210816224431-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816224431-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.116.132 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:46:01.061658   21514 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 22:46:01.062901   21514 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:46:01.063049   21514 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 22:46:01.074223   21514 install.go:137] /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2 version is 1.22.0
	I0816 22:46:01.074557   21514 start_flags.go:716] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0816 22:46:01.074586   21514 cni.go:93] Creating CNI manager for ""
	I0816 22:46:01.074600   21514 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:46:01.074625   21514 start_flags.go:277] config:
	{Name:newest-cni-20210816224431-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816224431-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.116.132 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:46:01.074737   21514 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 22:46:01.076704   21514 out.go:177] * Starting control plane node newest-cni-20210816224431-6986 in cluster newest-cni-20210816224431-6986
	I0816 22:46:01.076721   21514 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 22:46:01.076741   21514 preload.go:147] Found local preload: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0816 22:46:01.076752   21514 cache.go:56] Caching tarball of preloaded images
	I0816 22:46:01.076861   21514 preload.go:173] Found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0816 22:46:01.076876   21514 cache.go:59] Finished verifying existence of preloaded tar for  v1.22.0-rc.0 on containerd
	I0816 22:46:01.076972   21514 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/config.json ...
	I0816 22:46:01.077116   21514 cache.go:205] Successfully downloaded all kic artifacts
	I0816 22:46:01.077136   21514 start.go:313] acquiring machines lock for newest-cni-20210816224431-6986: {Name:mk808edd60d1305a42bb85791729eff4573dbb15 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0816 22:46:01.077179   21514 start.go:317] acquired machines lock for "newest-cni-20210816224431-6986" in 30.987µs
	I0816 22:46:01.077197   21514 start.go:93] Skipping create...Using existing machine configuration
	I0816 22:46:01.077204   21514 fix.go:55] fixHost starting: 
	I0816 22:46:01.077461   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:46:01.077497   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:46:01.087093   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43639
	I0816 22:46:01.087457   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:46:01.087883   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:46:01.087904   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:46:01.088209   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:46:01.088376   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:46:01.088508   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetState
	I0816 22:46:01.091360   21514 fix.go:108] recreateIfNeeded on newest-cni-20210816224431-6986: state=Stopped err=<nil>
	I0816 22:46:01.091403   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	W0816 22:46:01.091557   21514 fix.go:134] unexpected machine state, will restart: <nil>
	I0816 22:46:01.093564   21514 out.go:177] * Restarting existing kvm2 VM for "newest-cni-20210816224431-6986" ...
	I0816 22:46:01.093589   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Start
	I0816 22:46:01.093756   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Ensuring networks are active...
	I0816 22:46:01.095569   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Ensuring network default is active
	I0816 22:46:01.095849   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Ensuring network mk-newest-cni-20210816224431-6986 is active
	I0816 22:46:01.096218   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Getting domain xml...
	I0816 22:46:01.097869   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Creating domain...
	I0816 22:46:01.545908   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Waiting to get IP...
	I0816 22:46:01.547122   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:01.547603   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has current primary IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:01.547636   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Found IP for machine: 192.168.116.132
	I0816 22:46:01.547653   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Reserving static IP address...
	I0816 22:46:01.548190   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "newest-cni-20210816224431-6986", mac: "52:54:00:50:9a:fc", ip: "192.168.116.132"} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:44:46 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:01.548220   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Reserved static IP address: 192.168.116.132
	I0816 22:46:01.548260   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | skip adding static IP to network mk-newest-cni-20210816224431-6986 - found existing host DHCP lease matching {name: "newest-cni-20210816224431-6986", mac: "52:54:00:50:9a:fc", ip: "192.168.116.132"}
	I0816 22:46:01.548282   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Getting to WaitForSSH function...
	I0816 22:46:01.548306   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Waiting for SSH to be available...
	I0816 22:46:01.553811   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:01.554159   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:44:46 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:01.554194   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:01.554331   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Using SSH client type: external
	I0816 22:46:01.554364   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Using SSH private key: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa (-rw-------)
	I0816 22:46:01.554403   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.116.132 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0816 22:46:01.554424   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | About to run SSH command:
	I0816 22:46:01.554441   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | exit 0
	I0816 22:46:13.711429   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | SSH cmd err, output: <nil>: 
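
The DBG lines above are minikube's WaitForSSH loop: it shells out to /usr/bin/ssh with the listed options and runs "exit 0" until the probe succeeds (here, about twelve seconds after the VM restart). A rough standalone equivalent, where the key path is hypothetical and the 2-second retry cadence is an assumption rather than a value from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Options mirror the logged ssh invocation.
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", "/path/to/machines/id_rsa", // hypothetical key path
			"-p", "22",
			"docker@192.168.116.132",
			"exit 0",
		}
		for {
			if err := exec.Command("/usr/bin/ssh", args...).Run(); err == nil {
				fmt.Println("SSH is available")
				return
			}
			time.Sleep(2 * time.Second) // assumed cadence
		}
	}
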
	I0816 22:46:13.711794   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetConfigRaw
	I0816 22:46:13.712501   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetIP
	I0816 22:46:13.717700   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:13.718066   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:13.718095   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:13.718293   21514 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/config.json ...
	I0816 22:46:13.718450   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:46:13.718632   21514 machine.go:88] provisioning docker machine ...
	I0816 22:46:13.718660   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:46:13.718876   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetMachineName
	I0816 22:46:13.719024   21514 buildroot.go:166] provisioning hostname "newest-cni-20210816224431-6986"
	I0816 22:46:13.719050   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetMachineName
	I0816 22:46:13.719168   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:46:13.723341   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:13.723617   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:13.723646   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:13.723737   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:46:13.723862   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:46:13.724023   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:46:13.724126   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:46:13.724310   21514 main.go:130] libmachine: Using SSH client type: native
	I0816 22:46:13.724511   21514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.116.132 22 <nil> <nil>}
	I0816 22:46:13.724533   21514 main.go:130] libmachine: About to run SSH command:
	sudo hostname newest-cni-20210816224431-6986 && echo "newest-cni-20210816224431-6986" | sudo tee /etc/hostname
	I0816 22:46:13.885851   21514 main.go:130] libmachine: SSH cmd err, output: <nil>: newest-cni-20210816224431-6986
	
	I0816 22:46:13.885885   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:46:13.891469   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:13.891848   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:13.891880   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:13.892054   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:46:13.892247   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:46:13.892430   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:46:13.892570   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:46:13.892725   21514 main.go:130] libmachine: Using SSH client type: native
	I0816 22:46:13.892855   21514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.116.132 22 <nil> <nil>}
	I0816 22:46:13.892874   21514 main.go:130] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-20210816224431-6986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-20210816224431-6986/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-20210816224431-6986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0816 22:46:14.016803   21514 main.go:130] libmachine: SSH cmd err, output: <nil>: 
	I0816 22:46:14.016831   21514 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube}
	I0816 22:46:14.016869   21514 buildroot.go:174] setting up certificates
	I0816 22:46:14.016878   21514 provision.go:83] configureAuth start
	I0816 22:46:14.016889   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetMachineName
	I0816 22:46:14.017148   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetIP
	I0816 22:46:14.022791   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.023127   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:14.023164   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.023264   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:46:14.027541   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.027828   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:14.027855   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.027932   21514 provision.go:138] copyHostCerts
	I0816 22:46:14.027991   21514 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem, removing ...
	I0816 22:46:14.028000   21514 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem
	I0816 22:46:14.028065   21514 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.pem (1078 bytes)
	I0816 22:46:14.028165   21514 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem, removing ...
	I0816 22:46:14.028176   21514 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem
	I0816 22:46:14.028210   21514 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cert.pem (1123 bytes)
	I0816 22:46:14.028271   21514 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem, removing ...
	I0816 22:46:14.028281   21514 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem
	I0816 22:46:14.028307   21514 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/key.pem (1675 bytes)
	I0816 22:46:14.028356   21514 provision.go:112] generating server cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem org=jenkins.newest-cni-20210816224431-6986 san=[192.168.116.132 192.168.116.132 localhost 127.0.0.1 minikube newest-cni-20210816224431-6986]
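
The provision step above issues a fresh server certificate from the profile's CA, with SANs covering the VM IP, loopback, and the machine's hostnames. As an illustration only (minikube loads the existing ca.pem/ca-key.pem pair rather than generating a CA, and its real implementation differs in detail), issuing such a SAN-bearing certificate with crypto/x509 looks roughly like this, errors elided for brevity:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA; stands in for the ca.pem/ca-key.pem pair from the log.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert whose SANs match the san=[...] list in the log line.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-20210816224431-6986"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			IPAddresses:  []net.IP{net.ParseIP("192.168.116.132"), net.ParseIP("127.0.0.1")},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-20210816224431-6986"},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
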
	I0816 22:46:14.209466   21514 provision.go:172] copyRemoteCerts
	I0816 22:46:14.209519   21514 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0816 22:46:14.209549   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:46:14.214515   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.214812   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:14.214840   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.215055   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:46:14.215206   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:46:14.215321   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:46:14.215401   21514 sshutil.go:53] new ssh client: &{IP:192.168.116.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa Username:docker}
	I0816 22:46:14.294212   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0816 22:46:14.310191   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server.pem --> /etc/docker/server.pem (1265 bytes)
	I0816 22:46:14.325847   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0816 22:46:14.341302   21514 provision.go:86] duration metric: configureAuth took 324.411074ms
	I0816 22:46:14.341324   21514 buildroot.go:189] setting minikube options for container-runtime
	I0816 22:46:14.341509   21514 config.go:177] Loaded profile config "newest-cni-20210816224431-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0816 22:46:14.341520   21514 machine.go:91] provisioned docker machine in 622.872994ms
	I0816 22:46:14.341536   21514 start.go:267] post-start starting for "newest-cni-20210816224431-6986" (driver="kvm2")
	I0816 22:46:14.341548   21514 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0816 22:46:14.341577   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:46:14.341895   21514 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0816 22:46:14.341934   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:46:14.346801   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.347136   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:14.347165   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.347297   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:46:14.347462   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:46:14.347601   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:46:14.347734   21514 sshutil.go:53] new ssh client: &{IP:192.168.116.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa Username:docker}
	I0816 22:46:14.426622   21514 ssh_runner.go:149] Run: cat /etc/os-release
	I0816 22:46:14.431162   21514 info.go:137] Remote host: Buildroot 2020.02.12
	I0816 22:46:14.431180   21514 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/addons for local assets ...
	I0816 22:46:14.431237   21514 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files for local assets ...
	I0816 22:46:14.431327   21514 filesync.go:149] local asset: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem -> 69862.pem in /etc/ssl/certs
	I0816 22:46:14.431435   21514 ssh_runner.go:149] Run: sudo mkdir -p /etc/ssl/certs
	I0816 22:46:14.438129   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:46:14.454171   21514 start.go:270] post-start completed in 112.617965ms
	I0816 22:46:14.454221   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:46:14.454460   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:46:14.459422   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.459760   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:14.459783   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.459922   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:46:14.460089   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:46:14.460310   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:46:14.460450   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:46:14.460606   21514 main.go:130] libmachine: Using SSH client type: native
	I0816 22:46:14.460744   21514 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x803000] 0x802fc0 <nil>  [] 0s} 192.168.116.132 22 <nil> <nil>}
	I0816 22:46:14.460755   21514 main.go:130] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0816 22:46:14.567778   21514 main.go:130] libmachine: SSH cmd err, output: <nil>: 1629153974.479290825
	
	I0816 22:46:14.567798   21514 fix.go:212] guest clock: 1629153974.479290825
	I0816 22:46:14.567806   21514 fix.go:225] Guest: 2021-08-16 22:46:14.479290825 +0000 UTC Remote: 2021-08-16 22:46:14.454443208 +0000 UTC m=+13.541804922 (delta=24.847617ms)
	I0816 22:46:14.567855   21514 fix.go:196] guest clock delta is within tolerance: 24.847617ms
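
The literal %!s(MISSING) and %!N(MISSING) in the "date" command above are not shell syntax: they are Go's fmt error markers for format verbs with no matching operand. The intended guest command is `date +%s.%N` (Unix time with nanoseconds, which is exactly what comes back: 1629153974.479290825), but the string was evidently routed through a Printf-style formatter without arguments; the same mangling appears below in the stat existence check and the printf invocations. A two-line reproduction:

	package main

	import "fmt"

	func main() {
		// No operands for %s or %N, so fmt substitutes error markers.
		fmt.Println(fmt.Sprintf("date +%s.%N")) // date +%!s(MISSING).%!N(MISSING)
	}
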
	I0816 22:46:14.567864   21514 fix.go:57] fixHost completed within 13.490659677s
	I0816 22:46:14.567870   21514 start.go:80] releasing machines lock for "newest-cni-20210816224431-6986", held for 13.490681925s
	I0816 22:46:14.567941   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:46:14.568190   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetIP
	I0816 22:46:14.572979   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.573270   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:14.573308   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.573419   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:46:14.573594   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:46:14.574005   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:46:14.574201   21514 ssh_runner.go:149] Run: systemctl --version
	I0816 22:46:14.574222   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:46:14.574279   21514 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0816 22:46:14.574314   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:46:14.580933   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.580958   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.581226   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:14.581260   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.581303   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:14.581321   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:14.581378   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:46:14.581467   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:46:14.581549   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:46:14.581625   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:46:14.581633   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:46:14.581738   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:46:14.581780   21514 sshutil.go:53] new ssh client: &{IP:192.168.116.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa Username:docker}
	I0816 22:46:14.581843   21514 sshutil.go:53] new ssh client: &{IP:192.168.116.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa Username:docker}
	I0816 22:46:14.660607   21514 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 22:46:14.660761   21514 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:46:18.660575   21514 ssh_runner.go:189] Completed: sudo crictl images --output json: (3.999784803s)
	I0816 22:46:18.660713   21514 containerd.go:609] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.22.0-rc.0". assuming images are not preloaded.
	I0816 22:46:18.660774   21514 ssh_runner.go:149] Run: which lz4
	I0816 22:46:18.664943   21514 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0816 22:46:18.669122   21514 ssh_runner.go:306] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/preloaded.tar.lz4': No such file or directory
	I0816 22:46:18.669148   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (945588089 bytes)
	I0816 22:46:23.666976   21514 containerd.go:546] Took 5.002065 seconds to copy over tarball
	I0816 22:46:23.667039   21514 ssh_runner.go:149] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0816 22:46:33.630408   21514 ssh_runner.go:189] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (9.963347626s)
	I0816 22:46:33.630438   21514 containerd.go:553] Took 9.963432 seconds to extract the tarball
	I0816 22:46:33.630451   21514 ssh_runner.go:100] rm: /preloaded.tar.lz4
	I0816 22:46:33.697380   21514 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:46:33.872986   21514 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:46:33.925109   21514 ssh_runner.go:149] Run: sudo systemctl stop -f crio
	I0816 22:46:34.408772   21514 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0816 22:46:34.422663   21514 docker.go:153] disabling docker service ...
	I0816 22:46:34.422719   21514 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0816 22:46:34.433892   21514 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0816 22:46:34.443063   21514 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0816 22:46:34.560029   21514 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0816 22:46:34.692430   21514 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0816 22:46:34.703585   21514 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0816 22:46:34.717772   21514 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKW2dycGNdCiAgYWRkcmVzcyA9ICIvcnVuL2NvbnRhaW5lcmQvY29udGFpbmVyZC5zb2NrIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbWF4X3JlY3ZfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKICBtYXhfc2VuZF9tZXNzYWdlX3NpemUgPSAxNjc3NzIxNgoKW2RlYnVnXQogIGFkZHJlc3MgPSAiIgogIHVpZCA9IDAKICBnaWQgPSAwCiAgbGV2ZWwgPSAiIgoKW21ldHJpY3NdCiAgYWRkcmVzcyA9ICIiCiAgZ3JwY19oaXN0b2dyYW0gPSBmYWxzZQoKW2Nncm91cF0KICBwYXRoID0gIiIKCltwbHVnaW5zXQogIFtwbHVnaW5zLmNncm91cHNdCiAgICBub19wcm9tZXRoZXVzID0gZmFsc2UKICBbcGx1Z2lucy5jcmldCiAgICBzdHJlYW1fc2VydmVyX2FkZHJlc3MgPSAiIgogICAgc3RyZWFtX3NlcnZlcl9wb3J0ID0gIjEwMDEwIgogICAgZW5hYmxlX3NlbGludXggPSBmYWxzZQogICAgc2FuZGJveF9pbWFnZSA9ICJrOHMuZ2NyLmlvL3BhdXNlOjMuNC4xIgogICAgc3RhdHNfY29sbGVjdF9wZXJpb2QgPSAxMAogICAgZW5hYmxlX3Rsc19zdHJlYW1pbmcgPSBmYWxzZQogICAgbWF4X2NvbnRhaW5lcl9sb2dfbGluZV9zaXplID0gMTYzODQKCglbcGx1Z2lucy4iaW8uY29udGFpbmVyZC5ncnBjLnYxLmNyaSJdCiAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkXQogICAgICAgIFtwbHVnaW5zLiJpby5jb250YWluZXJkLmdycGMudjEuY3JpIi5jb250YWluZXJkLnJ1bnRpbWVzXQogICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuY10KICAgICAgICAgICAgcnVudGltZV90eXBlID0gImlvLmNvbnRhaW5lcmQucnVuYy52MiIKICAgICAgICAgICAgW3BsdWdpbnMuImlvLmNvbnRhaW5lcmQuZ3JwYy52MS5jcmkiLmNvbnRhaW5lcmQucnVudGltZXMucnVuYy5vcHRpb25zXQogICAgICAgICAgICAgIFN5c3RlbWRDZ3JvdXAgPSBmYWxzZQoKICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkXQogICAgICBzbmFwc2hvdHRlciA9ICJvdmVybGF5ZnMiCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW5jLnYyIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMuc2NoZWR1bGVyXQogICAgcGF1c2VfdGhyZXNob2xkID0gMC4wMgogICAgZGVsZXRpb25fdGhyZXNob2xkID0gMAogICAgbXV0YXRpb25fdGhyZXNob2xkID0gMTAwCiAgICBzY2hlZHVsZV9kZWxheSA9ICIwcyIKICAgIHN0YXJ0dXBfZGVsYXkgPSAiMTAwbXMiCg==" | base64 -d | sudo tee /etc/containerd/config.toml"
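The long printf payload above is the node's /etc/containerd/config.toml, base64-encoded so it survives the shell round-trip; decoding it shows settings such as root = "/var/lib/containerd", SystemdCgroup = false, snapshotter = "overlayfs" and sandbox_image = "k8s.gcr.io/pause:3.4.1". A tiny decoder sketch (the payload here is truncated to the first line for brevity; paste the full string from the log to see everything):

    package main

    import (
    	"encoding/base64"
    	"fmt"
    )

    // Decode the config.toml payload shipped in the log line above.
    func main() {
    	payload := "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgo=" // truncated example
    	out, err := base64.StdEncoding.DecodeString(payload)
    	if err != nil {
    		fmt.Println("decode failed:", err)
    		return
    	}
    	fmt.Print(string(out)) // prints: root = "/var/lib/containerd"
    }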
	I0816 22:46:34.731601   21514 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0816 22:46:34.738627   21514 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0816 22:46:34.738708   21514 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0816 22:46:34.755558   21514 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
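The 255 exit above is expected on a fresh VM: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, so the failed sysctl read triggers a modprobe, after which ip_forward is enabled the same way. A sketch of that fallback, assuming root privileges (illustrative, not the bootstrapper's code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // Probe the bridge netfilter sysctl first and modprobe on failure,
    // mirroring the fallback seen in the log.
    func main() {
    	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"

    	if _, err := os.Stat(key); err != nil {
    		fmt.Println("sysctl missing, loading br_netfilter:", err)
    		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
    			fmt.Printf("modprobe failed: %v: %s\n", err, out)
    			return
    		}
    	}
    	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
    	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
    		fmt.Println("enabling ip_forward failed:", err)
    	}
    }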
	I0816 22:46:34.762000   21514 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0816 22:46:34.882126   21514 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0816 22:46:35.437447   21514 start.go:392] Will wait 60s for socket path /run/containerd/containerd.sock
	I0816 22:46:35.437510   21514 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0816 22:46:35.444999   21514 retry.go:31] will retry after 1.104660288s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/run/containerd/containerd.sock': No such file or directory
	I0816 22:46:36.550448   21514 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
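containerd needs a moment to recreate its socket after the restart, so readiness is polled by stat'ing the path with retries until a 60s deadline, exactly as the retry line above shows. The same pattern in a few lines of Go (interval is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // Poll-with-deadline pattern from the log: stat the containerd socket
    // until it appears or 60s elapse.
    func main() {
    	const sock = "/run/containerd/containerd.sock"
    	deadline := time.Now().Add(60 * time.Second)

    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(sock); err == nil {
    			fmt.Println("socket is up")
    			return
    		}
    		time.Sleep(time.Second) // the log shows ~1.1s between attempts
    	}
    	fmt.Println("timed out waiting for", sock)
    }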
	I0816 22:46:36.556120   21514 start.go:413] Will wait 60s for crictl version
	I0816 22:46:36.556169   21514 ssh_runner.go:149] Run: sudo crictl version
	I0816 22:46:36.591005   21514 start.go:422] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.4.9
	RuntimeApiVersion:  v1alpha2
	I0816 22:46:36.591064   21514 ssh_runner.go:149] Run: containerd --version
	I0816 22:46:36.624579   21514 ssh_runner.go:149] Run: containerd --version
	I0816 22:46:36.659274   21514 out.go:177] * Preparing Kubernetes v1.22.0-rc.0 on containerd 1.4.9 ...
	I0816 22:46:36.659313   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetIP
	I0816 22:46:36.664585   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:36.664982   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:46:36.665013   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:46:36.665215   21514 ssh_runner.go:149] Run: grep 192.168.116.1	host.minikube.internal$ /etc/hosts
	I0816 22:46:36.669325   21514 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.116.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0816 22:46:36.681235   21514 out.go:177]   - kubelet.network-plugin=cni
	I0816 22:46:36.682684   21514 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I0816 22:46:36.682745   21514 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 22:46:36.682793   21514 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:46:36.716619   21514 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:46:36.716644   21514 containerd.go:517] Images already preloaded, skipping extraction
	I0816 22:46:36.716687   21514 ssh_runner.go:149] Run: sudo crictl images --output json
	I0816 22:46:36.750348   21514 containerd.go:613] all images are preloaded for containerd runtime.
	I0816 22:46:36.750371   21514 cache_images.go:74] Images are preloaded, skipping loading
	I0816 22:46:36.750417   21514 ssh_runner.go:149] Run: sudo crictl info
	I0816 22:46:36.782094   21514 cni.go:93] Creating CNI manager for ""
	I0816 22:46:36.782123   21514 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:46:36.782142   21514 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I0816 22:46:36.782161   21514 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.116.132 APIServerPort:8443 KubernetesVersion:v1.22.0-rc.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-20210816224431-6986 NodeName:newest-cni-20210816224431-6986 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.116.132"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.116.132 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0816 22:46:36.782392   21514 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.116.132
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "newest-cni-20210816224431-6986"
	  kubeletExtraArgs:
	    node-ip: 192.168.116.132
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.116.132"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.22.0-rc.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
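The generated kubeadm.yaml above is a multi-document file: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by ---. A quick way to list the kinds it carries (a rough split on the separator; kubeadm itself parses strictly):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // Split the multi-document YAML on "---" and print each document's kind.
    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // path from the log
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	for _, doc := range strings.Split(string(data), "\n---\n") {
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
    				fmt.Println(strings.TrimSpace(line))
    			}
    		}
    	}
    }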
	
	I0816 22:46:36.782530   21514 kubeadm.go:909] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.22.0-rc.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-20210816224431-6986 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.116.132 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816224431-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0816 22:46:36.782657   21514 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.22.0-rc.0
	I0816 22:46:36.789591   21514 binaries.go:44] Found k8s binaries, skipping transfer
	I0816 22:46:36.789643   21514 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0816 22:46:36.796161   21514 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (589 bytes)
	I0816 22:46:36.807488   21514 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0816 22:46:36.818436   21514 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I0816 22:46:36.830115   21514 ssh_runner.go:149] Run: grep 192.168.116.132	control-plane.minikube.internal$ /etc/hosts
	I0816 22:46:36.833656   21514 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.116.132	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
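The /etc/hosts edits above (here and earlier for host.minikube.internal) drop any stale line for the name and append a fresh IP<TAB>name mapping. A safe-to-run Go version of the same upsert, pointed at a scratch file rather than the real /etc/hosts:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // Drop any old line ending in "<TAB>name", then append "ip<TAB>name".
    func upsertHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	var keep []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if line != "" && !strings.HasSuffix(line, "\t"+name) {
    			keep = append(keep, line)
    		}
    	}
    	keep = append(keep, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(keep, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := upsertHost("/tmp/hosts", "192.168.116.132", "control-plane.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }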
	I0816 22:46:36.842841   21514 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986 for IP: 192.168.116.132
	I0816 22:46:36.842880   21514 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key
	I0816 22:46:36.842897   21514 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key
	I0816 22:46:36.842957   21514 certs.go:293] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/client.key
	I0816 22:46:36.842975   21514 certs.go:293] skipping minikube signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/apiserver.key.c979f591
	I0816 22:46:36.842990   21514 certs.go:293] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/proxy-client.key
	I0816 22:46:36.843091   21514 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem (1338 bytes)
	W0816 22:46:36.843126   21514 certs.go:372] ignoring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986_empty.pem, impossibly tiny 0 bytes
	I0816 22:46:36.843135   21514 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca-key.pem (1675 bytes)
	I0816 22:46:36.843158   21514 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/ca.pem (1078 bytes)
	I0816 22:46:36.843185   21514 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/cert.pem (1123 bytes)
	I0816 22:46:36.843209   21514 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/key.pem (1675 bytes)
	I0816 22:46:36.843255   21514 certs.go:376] found cert: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem (1708 bytes)
	I0816 22:46:36.844157   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0816 22:46:36.860384   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0816 22:46:36.875586   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0816 22:46:36.891375   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/newest-cni-20210816224431-6986/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0816 22:46:36.906497   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0816 22:46:36.921933   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0816 22:46:36.937186   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0816 22:46:36.952583   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0816 22:46:36.968121   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/ssl/certs/69862.pem --> /usr/share/ca-certificates/69862.pem (1708 bytes)
	I0816 22:46:36.983172   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0816 22:46:36.998345   21514 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/certs/6986.pem --> /usr/share/ca-certificates/6986.pem (1338 bytes)
	I0816 22:46:37.013200   21514 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0816 22:46:37.024774   21514 ssh_runner.go:149] Run: openssl version
	I0816 22:46:37.030223   21514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69862.pem && ln -fs /usr/share/ca-certificates/69862.pem /etc/ssl/certs/69862.pem"
	I0816 22:46:37.038049   21514 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/69862.pem
	I0816 22:46:37.042253   21514 certs.go:419] hashing: -rw-r--r-- 1 root root 1708 Aug 16 21:49 /usr/share/ca-certificates/69862.pem
	I0816 22:46:37.042297   21514 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69862.pem
	I0816 22:46:37.047884   21514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69862.pem /etc/ssl/certs/3ec20f2e.0"
	I0816 22:46:37.055065   21514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0816 22:46:37.063538   21514 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:46:37.068408   21514 certs.go:419] hashing: -rw-r--r-- 1 root root 1111 Aug 16 21:42 /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:46:37.068450   21514 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0816 22:46:37.074223   21514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0816 22:46:37.082919   21514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6986.pem && ln -fs /usr/share/ca-certificates/6986.pem /etc/ssl/certs/6986.pem"
	I0816 22:46:37.090259   21514 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/6986.pem
	I0816 22:46:37.094833   21514 certs.go:419] hashing: -rw-r--r-- 1 root root 1338 Aug 16 21:49 /usr/share/ca-certificates/6986.pem
	I0816 22:46:37.094872   21514 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6986.pem
	I0816 22:46:37.100236   21514 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6986.pem /etc/ssl/certs/51391683.0"
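The openssl x509 -hash calls above compute each CA's subject hash; the <hash>.0 symlinks (3ec20f2e.0, b5213941.0, 51391683.0) are what OpenSSL's lookup-by-hash expects in /etc/ssl/certs. A sketch of the hash step, printing the link it would create instead of creating it (assumes openssl on PATH):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // Compute the subject hash of a PEM certificate, as the log does.
    func main() {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
    		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	hash := strings.TrimSpace(string(out))
    	fmt.Printf("would link /etc/ssl/certs/%s.0 -> minikubeCA.pem\n", hash)
    }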
	I0816 22:46:37.108106   21514 kubeadm.go:390] StartCluster: {Name:newest-cni-20210816224431-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:newest-cni-20210816224431-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:network-plugin Value:cni} {Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.116.132 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[Dashboard:kubernetesui/dashboard:v2.1.0@sha256:7f80b5ba141bead69c4fee8661464857af300d7d7ed0274cf7beecedc00322e6 MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 22:46:37.108196   21514 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0816 22:46:37.108258   21514 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:46:37.142604   21514 cri.go:76] found id: ""
	I0816 22:46:37.142658   21514 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0816 22:46:37.149935   21514 kubeadm.go:401] found existing configuration files, will attempt cluster restart
	I0816 22:46:37.149952   21514 kubeadm.go:600] restartCluster start
	I0816 22:46:37.149986   21514 ssh_runner.go:149] Run: sudo test -d /data/minikube
	I0816 22:46:37.157345   21514 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:37.157908   21514 kubeconfig.go:117] verify returned: extract IP: "newest-cni-20210816224431-6986" does not appear in /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:46:37.157990   21514 kubeconfig.go:128] "newest-cni-20210816224431-6986" context is missing from /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig - will repair!
	I0816 22:46:37.158255   21514 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
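Note the "WriteFile acquiring" line: kubeconfig repairs happen under a lock with a 500ms retry delay and a 1m timeout so concurrent minikube processes don't clobber the file. A crude lockfile rendition of that pattern (illustrative only, not minikube's lock package):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // Acquire an exclusive lockfile next to path, retrying every delay until
    // timeout, and return a release function.
    func acquire(path string, timeout, delay time.Duration) (func(), error) {
    	deadline := time.Now().Add(timeout)
    	lock := path + ".lock"
    	for {
    		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(lock) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("timed out locking %s", path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquire("/tmp/kubeconfig", time.Minute, 500*time.Millisecond)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer release()
    	_ = os.WriteFile("/tmp/kubeconfig", []byte("# repaired contexts here\n"), 0600)
    }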
	I0816 22:46:37.160402   21514 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0816 22:46:37.166131   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:37.166167   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:37.174523   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:37.374936   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:37.375017   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:37.385099   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:37.575393   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:37.575481   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:37.584871   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:37.775155   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:37.775234   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:37.784662   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:37.975008   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:37.975070   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:37.984653   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:38.174972   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:38.175057   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:38.184448   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:38.374665   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:38.374747   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:38.384299   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:38.575587   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:38.575663   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:38.585029   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:38.775335   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:38.775415   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:38.785693   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:38.975002   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:38.975078   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:38.984778   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:39.175076   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:39.175149   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:39.184610   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:39.374892   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:39.374971   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:39.383974   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:39.575317   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:39.575390   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:39.584949   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:39.775268   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:39.775339   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:39.784800   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:39.975121   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:39.975206   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:39.984672   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:40.174992   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:40.175059   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:40.184610   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:40.184626   21514 api_server.go:164] Checking apiserver status ...
	I0816 22:46:40.184661   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0816 22:46:40.192955   21514 api_server.go:168] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0816 22:46:40.192971   21514 kubeadm.go:575] needs reconfigure: apiserver error: timed out waiting for the condition
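The burst of identical pgrep lines above is a ~200ms poll for a running apiserver process; when the short wait expires with no match, the cluster is declared in need of reconfiguration. The check in sketch form (timeout shortened for illustration):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // pgrep exits 1 when nothing matches, which Run reports as an error.
    func apiserverRunning() bool {
    	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
    }

    func main() {
    	deadline := time.Now().Add(3 * time.Second)
    	for time.Now().Before(deadline) {
    		if apiserverRunning() {
    			fmt.Println("apiserver process found")
    			return
    		}
    		time.Sleep(200 * time.Millisecond) // matches the log's cadence
    	}
    	fmt.Println("no apiserver process: cluster needs reconfigure")
    }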
	I0816 22:46:40.192979   21514 kubeadm.go:1032] stopping kube-system containers ...
	I0816 22:46:40.192993   21514 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0816 22:46:40.193033   21514 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0816 22:46:40.225257   21514 cri.go:76] found id: ""
	I0816 22:46:40.225327   21514 ssh_runner.go:149] Run: sudo systemctl stop kubelet
	I0816 22:46:40.239043   21514 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0816 22:46:40.245591   21514 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0816 22:46:40.245637   21514 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0816 22:46:40.252666   21514 kubeadm.go:676] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0816 22:46:40.252681   21514 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:46:40.376975   21514 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:46:41.109564   21514 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:46:41.324369   21514 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:46:41.424440   21514 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
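Rather than a full kubeadm init, the restart path replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane and etcd. The same sequence driven from Go (assumes kubeadm on PATH and the config path from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // Run the kubeadm init phases shown above, stopping at the first failure.
    func main() {
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
    			fmt.Printf("kubeadm %v failed: %v\n%s\n", p, err, out)
    			return
    		}
    	}
    	fmt.Println("control plane phases completed")
    }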
	I0816 22:46:41.493471   21514 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:46:41.493538   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:42.006807   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:42.506383   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:43.006831   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:43.506171   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:44.006594   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:44.506170   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:45.006751   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:45.507085   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:46.006629   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:46.506151   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:47.006760   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:47.506357   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:48.006799   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:48.507144   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:49.006141   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:49.506735   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:50.007145   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:50.507149   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:51.007177   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:51.507155   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:52.006911   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:52.507067   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:53.007012   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:53.506901   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:54.006982   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:54.507107   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:55.006520   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:55.506203   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:56.006185   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:56.507184   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:57.006266   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:57.506538   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:58.006123   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:58.506228   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:59.006844   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:46:59.506533   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:47:00.006633   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:47:00.506323   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:47:01.007120   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:47:01.506361   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:47:02.006231   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:47:02.506994   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:47:03.006810   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 22:47:03.017529   21514 api_server.go:70] duration metric: took 21.524057535s to wait for apiserver process to appear ...
	I0816 22:47:03.017553   21514 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:47:03.017565   21514 api_server.go:239] Checking apiserver healthz at https://192.168.116.132:8443/healthz ...
	I0816 22:47:08.018341   21514 api_server.go:255] stopped: https://192.168.116.132:8443/healthz: Get "https://192.168.116.132:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 22:47:08.519155   21514 api_server.go:239] Checking apiserver healthz at https://192.168.116.132:8443/healthz ...
	I0816 22:47:13.520067   21514 api_server.go:255] stopped: https://192.168.116.132:8443/healthz: Get "https://192.168.116.132:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0816 22:47:14.018618   21514 api_server.go:239] Checking apiserver healthz at https://192.168.116.132:8443/healthz ...
	I0816 22:47:15.621567   21514 api_server.go:265] https://192.168.116.132:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0816 22:47:15.621601   21514 api_server.go:101] status: https://192.168.116.132:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0816 22:47:16.019288   21514 api_server.go:239] Checking apiserver healthz at https://192.168.116.132:8443/healthz ...
	I0816 22:47:16.033451   21514 api_server.go:265] https://192.168.116.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:47:16.033488   21514 api_server.go:101] status: https://192.168.116.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:47:16.518822   21514 api_server.go:239] Checking apiserver healthz at https://192.168.116.132:8443/healthz ...
	I0816 22:47:16.533617   21514 api_server.go:265] https://192.168.116.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:47:16.533647   21514 api_server.go:101] status: https://192.168.116.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:47:17.019266   21514 api_server.go:239] Checking apiserver healthz at https://192.168.116.132:8443/healthz ...
	I0816 22:47:17.027983   21514 api_server.go:265] https://192.168.116.132:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0816 22:47:17.028007   21514 api_server.go:101] status: https://192.168.116.132:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0816 22:47:17.518677   21514 api_server.go:239] Checking apiserver healthz at https://192.168.116.132:8443/healthz ...
	I0816 22:47:17.525408   21514 api_server.go:265] https://192.168.116.132:8443/healthz returned 200:
	ok
	I0816 22:47:17.532401   21514 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:47:17.532422   21514 api_server.go:129] duration metric: took 14.514863365s to wait for apiserver health ...
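The healthz progression above is typical of a restarting apiserver: client timeouts while it binds, a 403 because the anonymous probe isn't covered by RBAC yet, 500s while poststarthooks (rbac/bootstrap-roles and friends) finish, then 200 "ok". A bare-bones anonymous probe with the same ~500ms cadence (TLS verification skipped since no client certs are presented; a sketch, not minikube's api_server.go):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // Poll /healthz until it returns 200, tolerating 403/500 along the way.
    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	for i := 0; i < 30; i++ {
    		resp, err := client.Get("https://192.168.116.132:8443/healthz")
    		if err != nil {
    			time.Sleep(500 * time.Millisecond)
    			continue
    		}
    		body, _ := io.ReadAll(resp.Body)
    		resp.Body.Close()
    		if resp.StatusCode == http.StatusOK {
    			fmt.Println("healthy:", string(body))
    			return
    		}
    		fmt.Println("not ready yet:", resp.StatusCode)
    		time.Sleep(500 * time.Millisecond)
    	}
    }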
	I0816 22:47:17.532434   21514 cni.go:93] Creating CNI manager for ""
	I0816 22:47:17.532443   21514 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 22:47:17.534483   21514 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0816 22:47:17.534552   21514 ssh_runner.go:149] Run: sudo mkdir -p /etc/cni/net.d
	I0816 22:47:17.541855   21514 ssh_runner.go:316] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0816 22:47:17.554903   21514 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:47:17.569288   21514 system_pods.go:59] 9 kube-system pods found
	I0816 22:47:17.569339   21514 system_pods.go:61] "coredns-78fcd69978-749xf" [6208ac25-3942-4cd5-92a0-d06ca299a035] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:47:17.569354   21514 system_pods.go:61] "coredns-78fcd69978-wrcr7" [b15084b1-b422-4c47-88f7-4d530bd3bac6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:47:17.569363   21514 system_pods.go:61] "etcd-newest-cni-20210816224431-6986" [7c7af59d-474f-4e1b-ac73-7be825aca15a] Running
	I0816 22:47:17.569372   21514 system_pods.go:61] "kube-apiserver-newest-cni-20210816224431-6986" [79a8398a-a4c7-4df7-b7f3-19fc26e17bc2] Running
	I0816 22:47:17.569383   21514 system_pods.go:61] "kube-controller-manager-newest-cni-20210816224431-6986" [1c11c512-e7d6-4771-b3f9-61a2bae963f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 22:47:17.569391   21514 system_pods.go:61] "kube-proxy-w6x94" [e29d54cf-57fb-4827-b095-c860cd21c86b] Running
	I0816 22:47:17.569398   21514 system_pods.go:61] "kube-scheduler-newest-cni-20210816224431-6986" [23dd165d-31cf-45d5-82e7-ffdf6488e24f] Running
	I0816 22:47:17.569404   21514 system_pods.go:61] "metrics-server-7c784ccb57-67p7g" [3a4df2ef-ed07-404d-8363-6ce11af1d8db] Pending
	I0816 22:47:17.569412   21514 system_pods.go:61] "storage-provisioner" [33a4ad67-b26f-4f7c-911b-c995de029df2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:47:17.569418   21514 system_pods.go:74] duration metric: took 14.501421ms to wait for pod list to return data ...
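The pod wait above lists kube-system pods and records each one's phase and readiness. An equivalent check with client-go against the repaired kubeconfig (assumes k8s.io/client-go in go.mod; the kubeconfig path is a placeholder):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // List kube-system pods and print each pod's phase.
    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%s\t%s\n", p.Name, p.Status.Phase)
    	}
    }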
	I0816 22:47:17.569427   21514 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:47:17.573647   21514 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:47:17.573678   21514 node_conditions.go:123] node cpu capacity is 2
	I0816 22:47:17.573694   21514 node_conditions.go:105] duration metric: took 4.261615ms to run NodePressure ...
	I0816 22:47:17.573710   21514 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.22.0-rc.0:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0816 22:47:17.955018   21514 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0816 22:47:17.965698   21514 ops.go:34] apiserver oom_adj: -16
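The final sanity check reads the apiserver's oom_adj from /proc; -16 means the kernel OOM killer will strongly avoid it. The same probe in Go:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // Find the newest kube-apiserver pid, then read its oom_adj from /proc.
    func main() {
    	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Println("no kube-apiserver process:", err)
    		return
    	}
    	pid := strings.TrimSpace(string(out))
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }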
	I0816 22:47:17.965718   21514 kubeadm.go:604] restartCluster took 40.81576111s
	I0816 22:47:17.965728   21514 kubeadm.go:392] StartCluster complete in 40.857628318s
	I0816 22:47:17.965758   21514 settings.go:142] acquiring lock: {Name:mk1500b3775cb0c129f78af92eabf0aeaaa54b94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:47:17.965867   21514 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:47:17.966454   21514 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig: {Name:mk18a025ba02245ddb30d7f1b7fc3420209446cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0816 22:47:17.973348   21514 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-20210816224431-6986" rescaled to 1
	I0816 22:47:17.973401   21514 start.go:226] Will wait 6m0s for node &{Name: IP:192.168.116.132 Port:8443 KubernetesVersion:v1.22.0-rc.0 ControlPlane:true Worker:true}
	I0816 22:47:17.973417   21514 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0816 22:47:17.975259   21514 out.go:177] * Verifying Kubernetes components...
	I0816 22:47:17.975310   21514 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 22:47:17.973431   21514 addons.go:342] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I0816 22:47:17.975412   21514 addons.go:59] Setting storage-provisioner=true in profile "newest-cni-20210816224431-6986"
	I0816 22:47:17.975431   21514 addons.go:135] Setting addon storage-provisioner=true in "newest-cni-20210816224431-6986"
	W0816 22:47:17.975440   21514 addons.go:147] addon storage-provisioner should already be in state true
	I0816 22:47:17.975447   21514 addons.go:59] Setting dashboard=true in profile "newest-cni-20210816224431-6986"
	I0816 22:47:17.975467   21514 host.go:66] Checking if "newest-cni-20210816224431-6986" exists ...
	I0816 22:47:17.975470   21514 addons.go:135] Setting addon dashboard=true in "newest-cni-20210816224431-6986"
	W0816 22:47:17.975477   21514 addons.go:147] addon dashboard should already be in state true
	I0816 22:47:17.975477   21514 addons.go:59] Setting default-storageclass=true in profile "newest-cni-20210816224431-6986"
	I0816 22:47:17.975489   21514 addons.go:59] Setting metrics-server=true in profile "newest-cni-20210816224431-6986"
	I0816 22:47:17.975516   21514 addons.go:135] Setting addon metrics-server=true in "newest-cni-20210816224431-6986"
	W0816 22:47:17.975539   21514 addons.go:147] addon metrics-server should already be in state true
	I0816 22:47:17.975570   21514 host.go:66] Checking if "newest-cni-20210816224431-6986" exists ...
	I0816 22:47:17.975495   21514 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-20210816224431-6986"
	I0816 22:47:17.975948   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:47:17.975984   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:47:17.975496   21514 host.go:66] Checking if "newest-cni-20210816224431-6986" exists ...
	I0816 22:47:17.976064   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:47:17.976093   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:47:17.976249   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:47:17.976274   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:47:17.976449   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:47:17.973649   21514 config.go:177] Loaded profile config "newest-cni-20210816224431-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.22.0-rc.0
	I0816 22:47:17.976531   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:47:17.987685   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38235
	I0816 22:47:17.988132   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:47:17.988651   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:47:17.988673   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:47:17.989025   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:47:17.989564   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:47:17.989602   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:47:17.995295   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46003
	I0816 22:47:17.995521   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44273
	I0816 22:47:17.995761   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:47:17.995867   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:47:17.995901   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33445
	I0816 22:47:17.996237   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:47:17.996256   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:47:17.996292   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:47:17.996371   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:47:17.996387   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:47:17.996762   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:47:17.996785   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:47:17.996809   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:47:17.996953   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:47:17.997103   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:47:17.997121   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetState
	I0816 22:47:17.997436   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:47:17.997472   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:47:17.997636   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:47:17.997676   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:47:18.009519   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43111
	I0816 22:47:18.009973   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:47:18.010466   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:47:18.010489   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:47:18.010600   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:45397
	I0816 22:47:18.010852   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:47:18.010968   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:47:18.011155   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetState
	I0816 22:47:18.011525   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:47:18.011550   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:47:18.011909   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:47:18.012071   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetState
	I0816 22:47:18.012254   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33005
	I0816 22:47:18.012617   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:47:18.013055   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:47:18.013080   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:47:18.013616   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:47:18.013799   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetState
	I0816 22:47:18.014497   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:47:18.017145   21514 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I0816 22:47:18.017198   21514 addons.go:275] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0816 22:47:18.017211   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (396 bytes)
	I0816 22:47:18.015647   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:47:18.017230   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:47:18.015984   21514 addons.go:135] Setting addon default-storageclass=true in "newest-cni-20210816224431-6986"
	W0816 22:47:18.017291   21514 addons.go:147] addon default-storageclass should already be in state true
	I0816 22:47:18.017324   21514 host.go:66] Checking if "newest-cni-20210816224431-6986" exists ...
	I0816 22:47:18.016995   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:47:18.019081   21514 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0816 22:47:18.020492   21514 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I0816 22:47:18.017738   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:47:18.020558   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:47:18.019187   21514 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:47:18.021940   21514 out.go:177]   - Using image kubernetesui/dashboard:v2.1.0
	I0816 22:47:18.021942   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0816 22:47:18.021977   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:47:18.021992   21514 addons.go:275] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0816 22:47:18.022002   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0816 22:47:18.022020   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:47:18.022830   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:47:18.023389   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:47:18.023420   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:47:18.023567   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:47:18.023730   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:47:18.023875   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:47:18.024012   21514 sshutil.go:53] new ssh client: &{IP:192.168.116.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa Username:docker}
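The "scp memory --> ..." steps that follow stream in-memory addon manifests to the guest over an SSH client like the one constructed here. A rough, self-contained sketch of that pattern using golang.org/x/crypto/ssh; the host, port, user and key location mirror the log line above, while copyToRemote, the sample manifest and the /tmp destination are invented for illustration:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // copyToRemote streams data to a root-owned path on the guest, roughly
    // what the "scp memory --> /etc/kubernetes/addons/..." steps do.
    func copyToRemote(client *ssh.Client, data []byte, dst string) error {
    	sess, err := client.NewSession()
    	if err != nil {
    		return err
    	}
    	defer sess.Close()
    	sess.Stdin = bytes.NewReader(data)
    	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
    }

    func main() {
    	key, err := os.ReadFile("/path/to/id_rsa") // placeholder key path
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	client, err := ssh.Dial("tcp", "192.168.116.132:22", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		// Sketch only: a real client should verify the host key.
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: demo\n")
    	if err := copyToRemote(client, manifest, "/tmp/demo.yaml"); err != nil {
    		panic(err)
    	}
    }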
	I0816 22:47:18.029078   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:47:18.029458   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:47:18.029486   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:47:18.029669   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:47:18.029822   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:47:18.029842   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:47:18.030003   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:47:18.030151   21514 sshutil.go:53] new ssh client: &{IP:192.168.116.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa Username:docker}
	I0816 22:47:18.030183   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:47:18.030222   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:47:18.030364   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:47:18.030525   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:47:18.030674   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:47:18.030808   21514 sshutil.go:53] new ssh client: &{IP:192.168.116.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa Username:docker}
	I0816 22:47:18.033388   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:41481
	I0816 22:47:18.033762   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:47:18.034165   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:47:18.034185   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:47:18.034539   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:47:18.035111   21514 main.go:130] libmachine: Found binary path at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin/docker-machine-driver-kvm2
	I0816 22:47:18.035158   21514 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:47:18.046605   21514 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35131
	I0816 22:47:18.047012   21514 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:47:18.047466   21514 main.go:130] libmachine: Using API Version  1
	I0816 22:47:18.047491   21514 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:47:18.047805   21514 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:47:18.047975   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetState
	I0816 22:47:18.051135   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .DriverName
	I0816 22:47:18.051335   21514 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0816 22:47:18.051349   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0816 22:47:18.051367   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHHostname
	I0816 22:47:18.056714   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:47:18.057123   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:50:9a:fc", ip: ""} in network mk-newest-cni-20210816224431-6986: {Iface:virbr8 ExpiryTime:2021-08-16 23:46:12 +0000 UTC Type:0 Mac:52:54:00:50:9a:fc Iaid: IPaddr:192.168.116.132 Prefix:24 Hostname:newest-cni-20210816224431-6986 Clientid:01:52:54:00:50:9a:fc}
	I0816 22:47:18.057147   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | domain newest-cni-20210816224431-6986 has defined IP address 192.168.116.132 and MAC address 52:54:00:50:9a:fc in network mk-newest-cni-20210816224431-6986
	I0816 22:47:18.057265   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHPort
	I0816 22:47:18.057415   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHKeyPath
	I0816 22:47:18.057549   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .GetSSHUsername
	I0816 22:47:18.057683   21514 sshutil.go:53] new ssh client: &{IP:192.168.116.132 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/newest-cni-20210816224431-6986/id_rsa Username:docker}
	I0816 22:47:18.142562   21514 addons.go:275] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0816 22:47:18.142590   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1849 bytes)
	I0816 22:47:18.162118   21514 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0816 22:47:18.162138   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0816 22:47:18.193952   21514 addons.go:275] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0816 22:47:18.193976   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2042 bytes)
	I0816 22:47:18.220623   21514 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0816 22:47:18.278582   21514 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0816 22:47:18.281519   21514 addons.go:275] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0816 22:47:18.281537   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0816 22:47:18.288547   21514 addons.go:275] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:47:18.288568   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (418 bytes)
	I0816 22:47:18.288774   21514 api_server.go:50] waiting for apiserver process to appear ...
	I0816 22:47:18.288825   21514 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
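The "waiting for apiserver process to appear" step is a poll loop around the pgrep command shown above; a small sketch of such a loop (the pgrep flags and pattern come from the log, while waitForProcess and the timeout are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep until the pattern matches or the deadline passes.
    func waitForProcess(pattern string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		// -f matches the full command line, -x requires an exact match,
    		// -n picks the newest PID; exit status 0 means at least one match.
    		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("process %q did not appear within %s", pattern, timeout)
    }

    func main() {
    	if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("apiserver process is up")
    }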
	I0816 22:47:18.290814   21514 start.go:708] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0816 22:47:18.298864   21514 addons.go:275] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0816 22:47:18.298884   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0816 22:47:18.328916   21514 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0816 22:47:18.335491   21514 addons.go:275] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0816 22:47:18.335511   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4196 bytes)
	I0816 22:47:18.401871   21514 addons.go:275] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0816 22:47:18.401895   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0816 22:47:18.478224   21514 addons.go:275] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0816 22:47:18.478261   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0816 22:47:18.573217   21514 addons.go:275] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0816 22:47:18.573256   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0816 22:47:18.607532   21514 addons.go:275] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0816 22:47:18.607561   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0816 22:47:18.648234   21514 addons.go:275] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0816 22:47:18.648264   21514 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0816 22:47:18.677095   21514 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
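Each apply step above is a kubectl invocation run with an explicit KUBECONFIG; a compact local sketch of the same call shape (run directly rather than over SSH, with two manifest paths taken from the log and the kubeconfig path kept as-is):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Apply a subset of the dashboard manifests with an explicit kubeconfig,
    	// mirroring the "kubectl apply -f ..." commands in the log above.
    	cmd := exec.Command("kubectl", "apply",
    		"-f", "/etc/kubernetes/addons/dashboard-ns.yaml",
    		"-f", "/etc/kubernetes/addons/dashboard-svc.yaml")
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	if err != nil {
    		os.Exit(1)
    	}
    }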
	I0816 22:47:18.695961   21514 main.go:130] libmachine: Making call to close driver server
	I0816 22:47:18.695994   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Close
	I0816 22:47:18.696259   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Closing plugin on server side
	I0816 22:47:18.696264   21514 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:47:18.696282   21514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:47:18.696292   21514 main.go:130] libmachine: Making call to close driver server
	I0816 22:47:18.696302   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Close
	I0816 22:47:18.696523   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Closing plugin on server side
	I0816 22:47:18.696546   21514 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:47:18.696578   21514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:47:18.696599   21514 main.go:130] libmachine: Making call to close driver server
	I0816 22:47:18.696611   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Close
	I0816 22:47:18.696845   21514 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:47:18.696860   21514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:47:18.966627   21514 api_server.go:70] duration metric: took 993.191413ms to wait for apiserver process to appear ...
	I0816 22:47:18.966655   21514 api_server.go:86] waiting for apiserver healthz status ...
	I0816 22:47:18.966664   21514 api_server.go:239] Checking apiserver healthz at https://192.168.116.132:8443/healthz ...
	I0816 22:47:18.966631   21514 main.go:130] libmachine: Making call to close driver server
	I0816 22:47:18.966725   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Close
	I0816 22:47:18.967012   21514 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:47:18.967047   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Closing plugin on server side
	I0816 22:47:18.967057   21514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:47:18.967076   21514 main.go:130] libmachine: Making call to close driver server
	I0816 22:47:18.967090   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Close
	I0816 22:47:18.967309   21514 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:47:18.967328   21514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:47:18.974225   21514 api_server.go:265] https://192.168.116.132:8443/healthz returned 200:
	ok
	I0816 22:47:18.976413   21514 api_server.go:139] control plane version: v1.22.0-rc.0
	I0816 22:47:18.976436   21514 api_server.go:129] duration metric: took 9.774611ms to wait for apiserver health ...
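The healthz wait above is an HTTPS GET against the apiserver that succeeds on a 200/"ok" response; a bare-bones sketch, with the caveat that it skips TLS verification purely for brevity where a real client would trust the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Assumption for the sketch only: skip certificate verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.116.132:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }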
	I0816 22:47:18.976447   21514 system_pods.go:43] waiting for kube-system pods to appear ...
	I0816 22:47:18.983223   21514 system_pods.go:59] 9 kube-system pods found
	I0816 22:47:18.983246   21514 system_pods.go:61] "coredns-78fcd69978-749xf" [6208ac25-3942-4cd5-92a0-d06ca299a035] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:47:18.983254   21514 system_pods.go:61] "coredns-78fcd69978-wrcr7" [b15084b1-b422-4c47-88f7-4d530bd3bac6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0816 22:47:18.983261   21514 system_pods.go:61] "etcd-newest-cni-20210816224431-6986" [7c7af59d-474f-4e1b-ac73-7be825aca15a] Running
	I0816 22:47:18.983279   21514 system_pods.go:61] "kube-apiserver-newest-cni-20210816224431-6986" [79a8398a-a4c7-4df7-b7f3-19fc26e17bc2] Running
	I0816 22:47:18.983286   21514 system_pods.go:61] "kube-controller-manager-newest-cni-20210816224431-6986" [1c11c512-e7d6-4771-b3f9-61a2bae963f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0816 22:47:18.983295   21514 system_pods.go:61] "kube-proxy-w6x94" [e29d54cf-57fb-4827-b095-c860cd21c86b] Running
	I0816 22:47:18.983302   21514 system_pods.go:61] "kube-scheduler-newest-cni-20210816224431-6986" [23dd165d-31cf-45d5-82e7-ffdf6488e24f] Running
	I0816 22:47:18.983306   21514 system_pods.go:61] "metrics-server-7c784ccb57-67p7g" [3a4df2ef-ed07-404d-8363-6ce11af1d8db] Pending
	I0816 22:47:18.983314   21514 system_pods.go:61] "storage-provisioner" [33a4ad67-b26f-4f7c-911b-c995de029df2] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0816 22:47:18.983323   21514 system_pods.go:74] duration metric: took 6.870298ms to wait for pod list to return data ...
	I0816 22:47:18.983333   21514 default_sa.go:34] waiting for default service account to be created ...
	I0816 22:47:18.995359   21514 default_sa.go:45] found service account: "default"
	I0816 22:47:18.995376   21514 default_sa.go:55] duration metric: took 12.037971ms for default service account to be created ...
	I0816 22:47:18.995384   21514 kubeadm.go:547] duration metric: took 1.02195268s to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0816 22:47:18.995404   21514 node_conditions.go:102] verifying NodePressure condition ...
	I0816 22:47:19.001993   21514 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0816 22:47:19.002015   21514 node_conditions.go:123] node cpu capacity is 2
	I0816 22:47:19.002026   21514 node_conditions.go:105] duration metric: took 6.618379ms to run NodePressure ...
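The NodePressure verification reads each node's conditions and passes when MemoryPressure, DiskPressure and PIDPressure are all False; a short client-go sketch of that check (the kubeconfig path is a placeholder, everything else uses standard client-go calls):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Print the three pressure conditions for every node; a healthy node
    	// reports Status "False" for each of them.
    	for _, n := range nodes.Items {
    		for _, c := range n.Status.Conditions {
    			switch c.Type {
    			case "MemoryPressure", "DiskPressure", "PIDPressure":
    				fmt.Printf("%s %s=%s\n", n.Name, c.Type, c.Status)
    			}
    		}
    	}
    }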
	I0816 22:47:19.002035   21514 start.go:231] waiting for startup goroutines ...
	I0816 22:47:19.009785   21514 main.go:130] libmachine: Making call to close driver server
	I0816 22:47:19.009820   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Close
	I0816 22:47:19.010068   21514 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:47:19.010113   21514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:47:19.010131   21514 main.go:130] libmachine: Making call to close driver server
	I0816 22:47:19.010147   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Close
	I0816 22:47:19.010415   21514 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:47:19.010434   21514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:47:19.010445   21514 addons.go:313] Verifying addon metrics-server=true in "newest-cni-20210816224431-6986"
	I0816 22:47:19.010423   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Closing plugin on server side
	I0816 22:47:19.343310   21514 main.go:130] libmachine: Making call to close driver server
	I0816 22:47:19.343342   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Close
	I0816 22:47:19.343593   21514 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:47:19.343611   21514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:47:19.343622   21514 main.go:130] libmachine: Making call to close driver server
	I0816 22:47:19.343633   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) Calling .Close
	I0816 22:47:19.343645   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Closing plugin on server side
	I0816 22:47:19.343872   21514 main.go:130] libmachine: (newest-cni-20210816224431-6986) DBG | Closing plugin on server side
	I0816 22:47:19.343903   21514 main.go:130] libmachine: Successfully made call to close driver server
	I0816 22:47:19.343915   21514 main.go:130] libmachine: Making call to close connection to plugin binary
	I0816 22:47:19.345775   21514 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0816 22:47:19.345798   21514 addons.go:344] enableAddons completed in 1.372371026s
	I0816 22:47:19.389898   21514 start.go:462] kubectl: 1.20.5, cluster: 1.22.0-rc.0 (minor skew: 2)
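The "minor skew: 2" figure comes from comparing the minor components of the kubectl and cluster versions (1.20.5 vs 1.22.0-rc.0); a tiny sketch of that arithmetic (minorSkew is an illustrative helper, not minikube's actual function):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference of the minor components of two
    // "major.minor.patch[-suffix]" version strings.
    func minorSkew(a, b string) int {
    	minor := func(v string) int {
    		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    		n, _ := strconv.Atoi(parts[1])
    		return n
    	}
    	d := minor(a) - minor(b)
    	if d < 0 {
    		d = -d
    	}
    	return d
    }

    func main() {
    	fmt.Println(minorSkew("1.20.5", "1.22.0-rc.0")) // prints 2
    }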
	I0816 22:47:19.391727   21514 out.go:177] 
	W0816 22:47:19.391864   21514 out.go:242] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.22.0-rc.0.
	I0816 22:47:19.393267   21514 out.go:177]   - Want kubectl v1.22.0-rc.0? Try 'minikube kubectl -- get pods -A'
	I0816 22:47:19.394806   21514 out.go:177] * Done! kubectl is now configured to use "newest-cni-20210816224431-6986" cluster and "default" namespace by default
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	5a21dc88c9d42       cf9cba6c3e4a8       47 seconds ago       Running             kube-controller-manager   2                   ca0f8c8c87908
	02ee26a5c84a1       0048118155842       54 seconds ago       Running             etcd                      1                   3c541e17d1ed2
	148f9e5acb809       b2462aa94d403       About a minute ago   Running             kube-apiserver            1                   8e4331d6b4771
	22afe5aae0f35       7da2efaa5b480       About a minute ago   Running             kube-scheduler            1                   29ad7d9cab126
	7b31f609984fc       cf9cba6c3e4a8       About a minute ago   Exited              kube-controller-manager   1                   ca0f8c8c87908
	
	* 
	* ==> containerd <==
	* -- Logs begin at Mon 2021-08-16 22:46:11 UTC, end at Mon 2021-08-16 22:48:03 UTC. --
	Aug 16 22:47:02 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:02.305636545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-newest-cni-20210816224431-6986,Uid:7e2f77f67b13e20a69cee33bb38b10ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5\""
	Aug 16 22:47:02 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:02.311758117Z" level=info msg="CreateContainer within sandbox \"8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:1,}"
	Aug 16 22:47:02 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:02.352003599Z" level=info msg="CreateContainer within sandbox \"8e4331d6b4771f2d12c85c73cc63bf986b3ef1ab26ffe9931451ac1b08b212b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:1,} returns container id \"148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019\""
	Aug 16 22:47:02 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:02.352991320Z" level=info msg="StartContainer for \"148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019\""
	Aug 16 22:47:02 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:02.718715209Z" level=info msg="StartContainer for \"148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019\" returns successfully"
	Aug 16 22:47:02 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:02.949062330Z" level=info msg="PullImage \"k8s.gcr.io/etcd:3.5.0-0\""
	Aug 16 22:47:08 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:08.913078583Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/etcd:3.5.0-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 16 22:47:08 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:08.918063001Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 16 22:47:08 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:08.922031349Z" level=info msg="ImageUpdate event &ImageUpdate{Name:k8s.gcr.io/etcd:3.5.0-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 16 22:47:08 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:08.925798579Z" level=info msg="ImageCreate event &ImageCreate{Name:k8s.gcr.io/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
	Aug 16 22:47:08 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:08.926163295Z" level=info msg="PullImage \"k8s.gcr.io/etcd:3.5.0-0\" returns image reference \"sha256:0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba\""
	Aug 16 22:47:08 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:08.930655779Z" level=info msg="CreateContainer within sandbox \"3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0\" for container &ContainerMetadata{Name:etcd,Attempt:1,}"
	Aug 16 22:47:09 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:09.245071482Z" level=info msg="CreateContainer within sandbox \"3c541e17d1ed24e43ba89f2dd33683dc84660bee7960f9b28c9d87c78d3070d0\" for &ContainerMetadata{Name:etcd,Attempt:1,} returns container id \"02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9\""
	Aug 16 22:47:09 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:09.246636133Z" level=info msg="StartContainer for \"02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9\""
	Aug 16 22:47:09 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:09.604805365Z" level=info msg="StartContainer for \"02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9\" returns successfully"
	Aug 16 22:47:15 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:15.680070890Z" level=info msg="Finish piping stderr of container \"7b31f609984fc54447e9df1259fba6a55768fcd53cfb0dc5bcfd7400326bdf1c\""
	Aug 16 22:47:15 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:15.683084322Z" level=info msg="TaskExit event &TaskExit{ContainerID:7b31f609984fc54447e9df1259fba6a55768fcd53cfb0dc5bcfd7400326bdf1c,ID:7b31f609984fc54447e9df1259fba6a55768fcd53cfb0dc5bcfd7400326bdf1c,Pid:2564,ExitStatus:255,ExitedAt:2021-08-16 22:47:15.682799946 +0000 UTC,XXX_unrecognized:[],}"
	Aug 16 22:47:15 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:15.683165531Z" level=info msg="Finish piping stdout of container \"7b31f609984fc54447e9df1259fba6a55768fcd53cfb0dc5bcfd7400326bdf1c\""
	Aug 16 22:47:15 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:15.791905476Z" level=info msg="shim disconnected" id=7b31f609984fc54447e9df1259fba6a55768fcd53cfb0dc5bcfd7400326bdf1c
	Aug 16 22:47:15 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:15.792052451Z" level=error msg="copy shim log" error="read /proc/self/fd/28: file already closed"
	Aug 16 22:47:15 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:15.812648180Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
	Aug 16 22:47:16 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:16.035371732Z" level=info msg="CreateContainer within sandbox \"ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}"
	Aug 16 22:47:16 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:16.084897532Z" level=info msg="CreateContainer within sandbox \"ca0f8c8c87908f1639c072e5d4aa2b1231c3662005c1c768ebd22fab9a78a3bf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217\""
	Aug 16 22:47:16 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:16.085865713Z" level=info msg="StartContainer for \"5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217\""
	Aug 16 22:47:16 newest-cni-20210816224431-6986 containerd[2154]: time="2021-08-16T22:47:16.699380721Z" level=info msg="StartContainer for \"5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217\" returns successfully"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* [Aug16 22:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.097101] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.794089] Unstable clock detected, switching default tracing clock to "global"
	              If you want to keep using the local clock, then add:
	                "trace_clock=local"
	              on the kernel command line
	[  +0.000018] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.426870] systemd-fstab-generator[1163]: Ignoring "noauto" for root device
	[  +0.033219] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +0.885094] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1734 comm=systemd-network
	[  +0.685985] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
	[  +0.430130] vboxguest: loading out-of-tree module taints kernel.
	[  +0.009638] vboxguest: PCI device not found, probably running on physical hardware.
	[ +20.659327] systemd-fstab-generator[2067]: Ignoring "noauto" for root device
	[  +0.697594] systemd-fstab-generator[2100]: Ignoring "noauto" for root device
	[  +0.120017] systemd-fstab-generator[2113]: Ignoring "noauto" for root device
	[  +0.200847] systemd-fstab-generator[2143]: Ignoring "noauto" for root device
	[  +6.430488] systemd-fstab-generator[2334]: Ignoring "noauto" for root device
	[Aug16 22:47] systemd-fstab-generator[2983]: Ignoring "noauto" for root device
	[  +0.682253] systemd-fstab-generator[3037]: Ignoring "noauto" for root device
	[  +0.890755] systemd-fstab-generator[3091]: Ignoring "noauto" for root device
	[Aug16 22:48] NFSD: Unable to end grace period: -110
	
	* 
	* ==> etcd [02ee26a5c84a12dfd5df596e946ee187b0651ddb8dfa661080351f7351bcf2a9] <==
	* {"level":"info","ts":"2021-08-16T22:47:09.668Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2021-08-16T22:47:09.671Z","caller":"etcdserver/server.go:834","msg":"starting etcd server","local-member-id":"b83c2c8e708b08c3","local-server-version":"3.5.0","cluster-id":"4b377c4094c7e6d4","cluster-version":"3.5"}
	{"level":"info","ts":"2021-08-16T22:47:09.671Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"b83c2c8e708b08c3","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2021-08-16T22:47:09.672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b83c2c8e708b08c3 switched to configuration voters=(13275534791866517699)"}
	{"level":"info","ts":"2021-08-16T22:47:09.673Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"4b377c4094c7e6d4","local-member-id":"b83c2c8e708b08c3","added-peer-id":"b83c2c8e708b08c3","added-peer-peer-urls":["https://192.168.116.132:2380"]}
	{"level":"info","ts":"2021-08-16T22:47:09.673Z","caller":"membership/cluster.go:523","msg":"updated cluster version","cluster-id":"4b377c4094c7e6d4","local-member-id":"b83c2c8e708b08c3","from":"3.5","to":"3.5"}
	{"level":"info","ts":"2021-08-16T22:47:09.681Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2021-08-16T22:47:09.681Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"b83c2c8e708b08c3","initial-advertise-peer-urls":["https://192.168.116.132:2380"],"listen-peer-urls":["https://192.168.116.132:2380"],"advertise-client-urls":["https://192.168.116.132:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.116.132:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2021-08-16T22:47:09.681Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2021-08-16T22:47:09.682Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.116.132:2380"}
	{"level":"info","ts":"2021-08-16T22:47:09.682Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.116.132:2380"}
	{"level":"info","ts":"2021-08-16T22:47:09.760Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b83c2c8e708b08c3 is starting a new election at term 2"}
	{"level":"info","ts":"2021-08-16T22:47:09.760Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b83c2c8e708b08c3 became pre-candidate at term 2"}
	{"level":"info","ts":"2021-08-16T22:47:09.761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b83c2c8e708b08c3 received MsgPreVoteResp from b83c2c8e708b08c3 at term 2"}
	{"level":"info","ts":"2021-08-16T22:47:09.761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b83c2c8e708b08c3 became candidate at term 3"}
	{"level":"info","ts":"2021-08-16T22:47:09.761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b83c2c8e708b08c3 received MsgVoteResp from b83c2c8e708b08c3 at term 3"}
	{"level":"info","ts":"2021-08-16T22:47:09.761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b83c2c8e708b08c3 became leader at term 3"}
	{"level":"info","ts":"2021-08-16T22:47:09.761Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b83c2c8e708b08c3 elected leader b83c2c8e708b08c3 at term 3"}
	{"level":"info","ts":"2021-08-16T22:47:09.761Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"b83c2c8e708b08c3","local-member-attributes":"{Name:newest-cni-20210816224431-6986 ClientURLs:[https://192.168.116.132:2379]}","request-path":"/0/members/b83c2c8e708b08c3/attributes","cluster-id":"4b377c4094c7e6d4","publish-timeout":"7s"}
	{"level":"info","ts":"2021-08-16T22:47:09.762Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-16T22:47:09.763Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2021-08-16T22:47:09.764Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2021-08-16T22:47:09.765Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
	{"level":"info","ts":"2021-08-16T22:47:09.765Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2021-08-16T22:47:09.766Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.116.132:2379"}
	
	* 
	* ==> kernel <==
	*  22:48:44 up 2 min,  0 users,  load average: 0.90, 0.48, 0.19
	Linux newest-cni-20210816224431-6986 4.19.182 #1 SMP Tue Aug 10 19:49:40 UTC 2021 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2020.02.12"
	
	* 
	* ==> kube-apiserver [148f9e5acb8090bc04efc0d346f7dfc2728bded026f4ee27b53bc6cea28ce019] <==
	* I0816 22:47:15.648945       1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0816 22:47:15.649383       1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0816 22:47:15.650190       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0816 22:47:15.650537       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
	I0816 22:47:15.650712       1 shared_informer.go:247] Caches are synced for crd-autoregister 
	I0816 22:47:15.708581       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
	I0816 22:47:15.796624       1 apf_controller.go:304] Running API Priority and Fairness config worker
	I0816 22:47:15.797178       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0816 22:47:15.798657       1 cache.go:39] Caches are synced for autoregister controller
	I0816 22:47:15.800124       1 shared_informer.go:247] Caches are synced for node_authorizer 
	I0816 22:47:15.801659       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0816 22:47:15.810550       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0816 22:47:16.545183       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0816 22:47:16.677027       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0816 22:47:16.677145       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	W0816 22:47:17.677600       1 handler_proxy.go:104] no RequestInfo found in the context
	E0816 22:47:17.677766       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0816 22:47:17.677790       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0816 22:47:17.793677       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0816 22:47:17.820427       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0816 22:47:17.897189       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0816 22:47:17.921165       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0816 22:47:17.937108       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0816 22:47:19.083662       1 controller.go:611] quota admission added evaluator for: namespaces
	
	* 
	* ==> kube-controller-manager [5a21dc88c9d42a80940e85fa9173b546afe44386b63fcd32bb286bc860637217] <==
	* I0816 22:47:19.470051       1 stateful_set.go:148] Starting stateful set controller
	I0816 22:47:19.470609       1 shared_informer.go:240] Waiting for caches to sync for stateful set
	I0816 22:47:19.473848       1 node_ipam_controller.go:91] Sending events to api server.
	W0816 22:48:04.475938       1 reflector.go:441] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:48:04.475931       1 reflector.go:441] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0816 22:48:04.476256       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.116.132:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": http2: client connection lost
	W0816 22:48:14.979343       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.116.132:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": net/http: TLS handshake timeout
	I0816 22:48:15.295365       1 trace.go:205] Trace[1626259082]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:48:05.294) (total time: 10000ms):
	Trace[1626259082]: [10.000743328s] [10.000743328s] END
	E0816 22:48:15.295617       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: Get "https://192.168.116.132:8443/api/v1/serviceaccounts?resourceVersion=557": net/http: TLS handshake timeout
	I0816 22:48:15.987290       1 trace.go:205] Trace[1793727087]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:48:05.985) (total time: 10001ms):
	Trace[1793727087]: [10.001624476s] [10.001624476s] END
	E0816 22:48:15.987427       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://192.168.116.132:8443/api/v1/secrets?resourceVersion=556": net/http: TLS handshake timeout
	W0816 22:48:25.981965       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.116.132:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": net/http: TLS handshake timeout
	I0816 22:48:28.196882       1 trace.go:205] Trace[1568984415]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:48:18.195) (total time: 10001ms):
	Trace[1568984415]: [10.001273742s] [10.001273742s] END
	E0816 22:48:28.197032       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: Get "https://192.168.116.132:8443/api/v1/serviceaccounts?resourceVersion=557": net/http: TLS handshake timeout
	I0816 22:48:29.014926       1 trace.go:205] Trace[1007682813]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:48:19.012) (total time: 10001ms):
	Trace[1007682813]: [10.00199212s] [10.00199212s] END
	E0816 22:48:29.015043       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Secret: failed to list *v1.Secret: Get "https://192.168.116.132:8443/api/v1/secrets?resourceVersion=556": net/http: TLS handshake timeout
	W0816 22:48:37.984609       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.116.132:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": net/http: TLS handshake timeout
	E0816 22:48:37.984921       1 cidr_allocator.go:137] Failed to list all nodes: Get "https://192.168.116.132:8443/api/v1/nodes": failed to get token for kube-system/node-controller: timed out waiting for the condition
	I0816 22:48:41.894746       1 trace.go:205] Trace[830254540]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:48:31.893) (total time: 10001ms):
	Trace[830254540]: [10.001556034s] [10.001556034s] END
	E0816 22:48:41.894881       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: Get "https://192.168.116.132:8443/api/v1/serviceaccounts?resourceVersion=557": net/http: TLS handshake timeout
	
	* 
	* ==> kube-controller-manager [7b31f609984fc54447e9df1259fba6a55768fcd53cfb0dc5bcfd7400326bdf1c] <==
	* 	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:159 +0x2b1
	
	goroutine 152 [select]:
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitForWithContext(0x51d54d0, 0xc0006fcec0, 0xc000a80630, 0xc00067ea40, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:655 +0x109
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll(0x51d54d0, 0xc0006fcec0, 0xc000a80601, 0xc000a80630, 0xc00067ea40, 0x436b220, 0xc00067ea40)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:591 +0xa5
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext(0x51d54d0, 0xc0006fcec0, 0xdf8475800, 0xc00067ea40, 0xc0000c4660, 0x4c63ab8)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:542 +0x65
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xdf8475800, 0xc00067ea20, 0xc000116360, 0x0, 0x0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:533 +0xa5
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:162 +0x328
	
	goroutine 153 [select]:
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel.func1(0xc000116360, 0xc00067ea30, 0x51d54d0, 0xc0006fcec0)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:298 +0x87
	created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextForChannel
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:297 +0x8c
	
	goroutine 154 [select]:
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1.1(0xc0000c4d20, 0xdf8475800, 0x0, 0x51d54d0, 0xc0006fcf00)
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:705 +0x156
	created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1
		/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:688 +0x96
	
	* 
	* ==> kube-scheduler [22afe5aae0f353525e1906b7f78a924f4ad66ddbf7d5c19646f7627442c6525a] <==
	* I0816 22:47:14.196190       1 trace.go:205] Trace[1250961944]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:47:04.186) (total time: 10009ms):
	Trace[1250961944]: [10.009604348s] [10.009604348s] END
	E0816 22:47:14.196906       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.116.132:8443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	I0816 22:47:14.225698       1 trace.go:205] Trace[1888648226]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:47:04.223) (total time: 10001ms):
	Trace[1888648226]: [10.00178459s] [10.00178459s] END
	E0816 22:47:14.226144       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.116.132:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	I0816 22:47:14.232661       1 trace.go:205] Trace[299216213]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:47:04.230) (total time: 10002ms):
	Trace[299216213]: [10.002064048s] [10.002064048s] END
	E0816 22:47:14.233026       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.116.132:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	I0816 22:47:14.242115       1 trace.go:205] Trace[1298987771]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:134 (16-Aug-2021 22:47:04.241) (total time: 10000ms):
	Trace[1298987771]: [10.000996775s] [10.000996775s] END
	E0816 22:47:14.242652       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.116.132:8443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	E0816 22:47:15.542724       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0816 22:47:15.551260       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0816 22:47:15.588669       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0816 22:47:16.004760       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0816 22:47:16.275306       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0816 22:47:16.276490       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0816 22:47:16.276942       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0816 22:47:16.277332       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0816 22:47:16.277957       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0816 22:47:16.278418       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0816 22:47:16.280622       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0816 22:47:16.281040       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	E0816 22:47:16.281599       1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Mon 2021-08-16 22:46:11 UTC, end at Mon 2021-08-16 22:48:44 UTC. --
	Aug 16 22:47:17 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:17.780135    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:17 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:17.880859    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:17 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:17.981375    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:18 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:18.081941    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:18 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:18.190922    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:18 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:18.292099    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:18 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:18.392347    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:18 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:18.507737    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:18 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:18.608137    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:18 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:18.709364    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:18 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:18.810553    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:18 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:18.910773    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:19 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:19.011642    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:19 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:19.112917    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:19 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:19.215985    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:19 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:19.334195    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:19 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:19.435413    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:19 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:19.536558    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:19 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:19.637798    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:19 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:19.738584    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:19 newest-cni-20210816224431-6986 kubelet[2342]: E0816 22:47:19.839822    2342 kubelet.go:2407] "Error getting node" err="node \"newest-cni-20210816224431-6986\" not found"
	Aug 16 22:47:20 newest-cni-20210816224431-6986 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Aug 16 22:47:20 newest-cni-20210816224431-6986 kubelet[2342]: I0816 22:47:20.216266    2342 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Aug 16 22:47:20 newest-cni-20210816224431-6986 systemd[1]: kubelet.service: Succeeded.
	Aug 16 22:47:20 newest-cni-20210816224431-6986 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	
	

-- /stdout --
** stderr ** 
	E0816 22:48:44.083705   22139 logs.go:190] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.0-rc.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: "\n** stderr ** \nUnable to connect to the server: net/http: TLS handshake timeout\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
helpers_test.go:250: failed logs error: exit status 110
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (84.50s)
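A note on the failure signature: the kube-scheduler ListAndWatch traces above all end just past ten seconds (10009ms, 10001ms, 10002ms, 10000ms), which matches the 10-second TLS handshake timeout that Go's net/http transport (and client-go's default transport) applies. While the control plane is paused, the kernel can still accept TCP connections for the apiserver, but the frozen process never completes the TLS handshake, so every client call, including the "kubectl describe nodes" used for log collection, fails with "net/http: TLS handshake timeout". A minimal Go sketch of the same probe, not part of the test suite, with the endpoint copied from the scheduler logs above:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Transport: &http.Transport{
				// Fail faster than the default 10s handshake timeout seen in the traces.
				TLSHandshakeTimeout: 3 * time.Second,
				// The probe only checks reachability, not certificate validity.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
			Timeout: 5 * time.Second,
		}
		resp, err := client.Get("https://192.168.116.132:8443/healthz")
		if err != nil {
			fmt.Println("probe failed:", err) // e.g. "net/http: TLS handshake timeout"
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver status:", resp.Status)
	}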

Test pass (232/269)

Order | Passed test | Duration (s)
3 TestDownloadOnly/v1.14.0/json-events 9.31
4 TestDownloadOnly/v1.14.0/preload-exists 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.06
10 TestDownloadOnly/v1.21.3/json-events 6.9
11 TestDownloadOnly/v1.21.3/preload-exists 0
15 TestDownloadOnly/v1.21.3/LogsDuration 0.06
17 TestDownloadOnly/v1.22.0-rc.0/json-events 7.72
18 TestDownloadOnly/v1.22.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.22.0-rc.0/LogsDuration 0.06
23 TestDownloadOnly/DeleteAll 0.22
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.22
26 TestOffline 158.91
29 TestAddons/parallel/Registry 15.42
30 TestAddons/parallel/Ingress 42.45
31 TestAddons/parallel/MetricsServer 5.8
32 TestAddons/parallel/HelmTiller 16.96
33 TestAddons/parallel/Olm 66.61
34 TestAddons/parallel/CSI 89.2
35 TestAddons/parallel/GCPAuth 73.35
36 TestCertOptions 98.93
38 TestForceSystemdFlag 75.23
39 TestForceSystemdEnv 101.09
40 TestKVMDriverInstallOrUpdate 3.68
44 TestErrorSpam/setup 58.87
45 TestErrorSpam/start 0.41
46 TestErrorSpam/status 0.73
47 TestErrorSpam/pause 5.03
48 TestErrorSpam/unpause 1.65
49 TestErrorSpam/stop 5.4
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 109.88
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 25.45
56 TestFunctional/serial/KubeContext 0.04
57 TestFunctional/serial/KubectlGetPods 0.24
60 TestFunctional/serial/CacheCmd/cache/add_remote 4.41
61 TestFunctional/serial/CacheCmd/cache/add_local 2.4
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.05
63 TestFunctional/serial/CacheCmd/cache/list 0.05
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
65 TestFunctional/serial/CacheCmd/cache/cache_reload 2.43
66 TestFunctional/serial/CacheCmd/cache/delete 0.1
67 TestFunctional/serial/MinikubeKubectlCmd 0.11
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
69 TestFunctional/serial/ExtraConfig 39.27
70 TestFunctional/serial/ComponentHealth 0.07
71 TestFunctional/serial/LogsCmd 1.42
72 TestFunctional/serial/LogsFileCmd 1.4
74 TestFunctional/parallel/ConfigCmd 0.4
75 TestFunctional/parallel/DashboardCmd 5.81
76 TestFunctional/parallel/DryRun 0.31
77 TestFunctional/parallel/InternationalLanguage 0.16
78 TestFunctional/parallel/StatusCmd 0.79
81 TestFunctional/parallel/ServiceCmd 24.61
82 TestFunctional/parallel/AddonsCmd 0.14
83 TestFunctional/parallel/PersistentVolumeClaim 50.07
85 TestFunctional/parallel/SSHCmd 0.46
86 TestFunctional/parallel/CpCmd 0.49
87 TestFunctional/parallel/MySQL 28.37
88 TestFunctional/parallel/FileSync 0.28
89 TestFunctional/parallel/CertSync 1.67
93 TestFunctional/parallel/NodeLabels 0.07
94 TestFunctional/parallel/LoadImage 3.34
95 TestFunctional/parallel/RemoveImage 3.65
96 TestFunctional/parallel/LoadImageFromFile 2.45
97 TestFunctional/parallel/BuildImage 5.87
98 TestFunctional/parallel/ListImages 0.31
99 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
102 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
104 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
105 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
109 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
110 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
111 TestFunctional/parallel/MountCmd/any-port 7.78
112 TestFunctional/parallel/ProfileCmd/profile_list 0.31
113 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
114 TestFunctional/parallel/Version/short 0.06
115 TestFunctional/parallel/Version/components 0.95
116 TestFunctional/parallel/MountCmd/specific-port 1.74
117 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
118 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
119 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
120 TestFunctional/delete_busybox_image 0.08
121 TestFunctional/delete_my-image_image 0.04
122 TestFunctional/delete_minikube_cached_images 0.04
126 TestJSONOutput/start/Audit 0
128 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
129 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
131 TestJSONOutput/pause/Audit 0
133 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
134 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
136 TestJSONOutput/unpause/Audit 0
138 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
139 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
141 TestJSONOutput/stop/Audit 0
143 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
144 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
145 TestErrorJSONOutput 0.32
148 TestMainNoArgs 0.05
151 TestMultiNode/serial/FreshStart2Nodes 157.39
152 TestMultiNode/serial/DeployApp2Nodes 5.95
153 TestMultiNode/serial/PingHostFrom2Pods 1.03
154 TestMultiNode/serial/AddNode 56.31
155 TestMultiNode/serial/ProfileList 0.24
156 TestMultiNode/serial/CopyFile 1.81
157 TestMultiNode/serial/StopNode 2.93
158 TestMultiNode/serial/StartAfterStop 73.79
159 TestMultiNode/serial/RestartKeepsNodes 503.29
160 TestMultiNode/serial/DeleteNode 2.28
161 TestMultiNode/serial/StopMultiNode 184.35
162 TestMultiNode/serial/RestartMultiNode 248.37
163 TestMultiNode/serial/ValidateNameConflict 61.66
169 TestDebPackageInstall/install_amd64_debian:sid/minikube 0
170 TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver 11.14
172 TestDebPackageInstall/install_amd64_debian:latest/minikube 0
173 TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver 10.02
175 TestDebPackageInstall/install_amd64_debian:10/minikube 0
176 TestDebPackageInstall/install_amd64_debian:10/kvm2-driver 9.7
178 TestDebPackageInstall/install_amd64_debian:9/minikube 0
179 TestDebPackageInstall/install_amd64_debian:9/kvm2-driver 8.31
181 TestDebPackageInstall/install_amd64_ubuntu:latest/minikube 0
182 TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver 17.04
184 TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube 0
185 TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver 16.2
187 TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube 0
188 TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver 16.48
190 TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube 0
191 TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver 15.37
192 TestPreload 153.14
194 TestScheduledStopUnix 103.76
198 TestRunningBinaryUpgrade 151.58
200 TestKubernetesUpgrade 339.07
203 TestPause/serial/Start 123.58
211 TestNetworkPlugins/group/false 0.39
218 TestPause/serial/Unpause 0.95
220 TestPause/serial/DeletePaused 1.22
221 TestPause/serial/VerifyDeletedResources 15.8
229 TestStoppedBinaryUpgrade/MinikubeLogs 1.34
230 TestNetworkPlugins/group/auto/Start 92.37
231 TestNetworkPlugins/group/kindnet/Start 107.03
232 TestNetworkPlugins/group/cilium/Start 204.11
233 TestNetworkPlugins/group/calico/Start 146.43
234 TestNetworkPlugins/group/auto/KubeletFlags 0.27
235 TestNetworkPlugins/group/auto/NetCatPod 17.66
236 TestNetworkPlugins/group/auto/DNS 0.28
237 TestNetworkPlugins/group/auto/Localhost 0.22
238 TestNetworkPlugins/group/auto/HairPin 0.25
239 TestNetworkPlugins/group/custom-weave/Start 127.17
240 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
241 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
242 TestNetworkPlugins/group/kindnet/NetCatPod 14.66
243 TestNetworkPlugins/group/kindnet/DNS 0.83
244 TestNetworkPlugins/group/kindnet/Localhost 0.26
245 TestNetworkPlugins/group/kindnet/HairPin 0.36
246 TestNetworkPlugins/group/enable-default-cni/Start 85.65
247 TestNetworkPlugins/group/calico/ControllerPod 5.03
248 TestNetworkPlugins/group/calico/KubeletFlags 0.25
249 TestNetworkPlugins/group/calico/NetCatPod 11.89
250 TestNetworkPlugins/group/calico/DNS 0.39
251 TestNetworkPlugins/group/calico/Localhost 0.24
252 TestNetworkPlugins/group/calico/HairPin 0.24
253 TestNetworkPlugins/group/flannel/Start 127.08
254 TestNetworkPlugins/group/cilium/ControllerPod 5.03
255 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
256 TestNetworkPlugins/group/enable-default-cni/NetCatPod 17.7
257 TestNetworkPlugins/group/cilium/KubeletFlags 0.23
258 TestNetworkPlugins/group/cilium/NetCatPod 20.78
259 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.27
260 TestNetworkPlugins/group/custom-weave/NetCatPod 18.76
261 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
262 TestNetworkPlugins/group/enable-default-cni/Localhost 0.23
263 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
264 TestNetworkPlugins/group/bridge/Start 87.99
265 TestNetworkPlugins/group/cilium/DNS 0.36
267 TestStartStop/group/old-k8s-version/serial/FirstStart 159.67
268 TestNetworkPlugins/group/cilium/Localhost 0.21
269 TestNetworkPlugins/group/cilium/HairPin 0.22
271 TestStartStop/group/no-preload/serial/FirstStart 187.87
272 TestNetworkPlugins/group/flannel/ControllerPod 5.03
273 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
274 TestNetworkPlugins/group/flannel/NetCatPod 11.63
275 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
276 TestNetworkPlugins/group/bridge/NetCatPod 11.69
277 TestNetworkPlugins/group/flannel/DNS 0.28
278 TestNetworkPlugins/group/flannel/Localhost 0.3
279 TestNetworkPlugins/group/flannel/HairPin 0.95
280 TestNetworkPlugins/group/bridge/DNS 44.93
282 TestStartStop/group/embed-certs/serial/FirstStart 95.4
283 TestNetworkPlugins/group/bridge/Localhost 0.25
284 TestNetworkPlugins/group/bridge/HairPin 0.25
286 TestStartStop/group/default-k8s-different-port/serial/FirstStart 83.37
287 TestStartStop/group/old-k8s-version/serial/DeployApp 9.67
288 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.5
289 TestStartStop/group/old-k8s-version/serial/Stop 92.54
290 TestStartStop/group/no-preload/serial/DeployApp 11.56
291 TestStartStop/group/embed-certs/serial/DeployApp 10.63
292 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.52
293 TestStartStop/group/no-preload/serial/Stop 98.23
294 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.16
295 TestStartStop/group/embed-certs/serial/Stop 95.16
296 TestStartStop/group/default-k8s-different-port/serial/DeployApp 9.59
297 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 1
298 TestStartStop/group/default-k8s-different-port/serial/Stop 92.23
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.15
300 TestStartStop/group/old-k8s-version/serial/SecondStart 439.33
301 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
302 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
303 TestStartStop/group/no-preload/serial/SecondStart 336.26
304 TestStartStop/group/embed-certs/serial/SecondStart 408.98
305 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.21
306 TestStartStop/group/default-k8s-different-port/serial/SecondStart 387.36
307 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.03
308 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
309 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
311 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.03
312 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
313 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
314 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
316 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
317 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 6.02
318 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
320 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.1
321 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.23
324 TestStartStop/group/newest-cni/serial/FirstStart 85.77
325 TestStartStop/group/newest-cni/serial/DeployApp 0
326 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.61
327 TestStartStop/group/newest-cni/serial/Stop 2.09
328 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.15
329 TestStartStop/group/newest-cni/serial/SecondStart 78.74
330 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
TestDownloadOnly/v1.14.0/json-events (9.31s)

=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210816214058-6986 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210816214058-6986 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (9.308872744s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (9.31s)

TestDownloadOnly/v1.14.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)

TestDownloadOnly/v1.14.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210816214058-6986
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210816214058-6986: exit status 85 (62.026947ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 21:40:58
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 21:40:58.065379    6998 out.go:298] Setting OutFile to fd 1 ...
	I0816 21:40:58.065547    6998 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:40:58.065555    6998 out.go:311] Setting ErrFile to fd 2...
	I0816 21:40:58.065558    6998 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:40:58.065649    6998 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	W0816 21:40:58.065749    6998 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/config/config.json: no such file or directory
	I0816 21:40:58.067061    6998 out.go:305] Setting JSON to true
	I0816 21:40:58.102209    6998 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":1420,"bootTime":1629148638,"procs":142,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 21:40:58.102296    6998 start.go:121] virtualization: kvm guest
	I0816 21:40:58.104825    6998 notify.go:169] Checking for updates...
	I0816 21:40:58.106547    6998 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 21:40:58.136146    6998 start.go:278] selected driver: kvm2
	I0816 21:40:58.136164    6998 start.go:751] validating driver "kvm2" against <nil>
	I0816 21:40:58.136761    6998 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 21:40:58.136955    6998 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 21:40:58.147555    6998 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0816 21:40:58.147607    6998 start_flags.go:263] no existing cluster config was found, will generate one from the flags 
	I0816 21:40:58.148061    6998 start_flags.go:344] Using suggested 6000MB memory alloc based on sys=32179MB, container=0MB
	I0816 21:40:58.148134    6998 start_flags.go:679] Wait components to verify : map[apiserver:true system_pods:true]
	I0816 21:40:58.148176    6998 cni.go:93] Creating CNI manager for ""
	I0816 21:40:58.148192    6998 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 21:40:58.148201    6998 start_flags.go:272] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0816 21:40:58.148209    6998 start_flags.go:277] config:
	{Name:download-only-20210816214058-6986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210816214058-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 21:40:58.148341    6998 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 21:40:58.150367    6998 download.go:92] Downloading: https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso.sha256 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/iso/minikube-v1.22.0-1628622362-12032.iso
	I0816 21:41:00.684504    6998 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0816 21:41:00.750807    6998 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4
	I0816 21:41:00.750846    6998 cache.go:56] Caching tarball of preloaded images
	I0816 21:41:00.751046    6998 preload.go:131] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0816 21:41:00.753236    6998 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4 ...
	I0816 21:41:00.816979    6998 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:8891d3d5a9795ff90493434142d1724b -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210816214058-6986"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.06s)
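The LogsDuration entries double as a record of minikube's preload flow: check for a local preloaded-images tarball, find the remote one, download it with a "?checksum=md5:..." query, then verify and save the checksum. A rough sketch of that download-and-verify pattern, illustrative only and not minikube's actual download.go, with the URL and MD5 taken from the log above:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// fetchWithMD5 downloads url to dest and fails if the payload's MD5 does not
	// match wantMD5 (the value after "checksum=md5:" in the download URL).
	func fetchWithMD5(url, dest, wantMD5 string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()

		out, err := os.Create(dest)
		if err != nil {
			return err
		}
		defer out.Close()

		// Hash while writing so the tarball is only read once.
		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
		}
		return nil
	}

	func main() {
		url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-containerd-overlay2-amd64.tar.lz4"
		if err := fetchWithMD5(url, "/tmp/preload.tar.lz4", "8891d3d5a9795ff90493434142d1724b"); err != nil {
			fmt.Println(err)
		}
	}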

TestDownloadOnly/v1.21.3/json-events (6.9s)

=== RUN   TestDownloadOnly/v1.21.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210816214058-6986 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210816214058-6986 --force --alsologtostderr --kubernetes-version=v1.21.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (6.903777055s)
--- PASS: TestDownloadOnly/v1.21.3/json-events (6.90s)

TestDownloadOnly/v1.21.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.21.3/preload-exists
--- PASS: TestDownloadOnly/v1.21.3/preload-exists (0.00s)

TestDownloadOnly/v1.21.3/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.21.3/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210816214058-6986
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210816214058-6986: exit status 85 (63.427549ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 21:41:07
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 21:41:07.435360    7034 out.go:298] Setting OutFile to fd 1 ...
	I0816 21:41:07.435427    7034 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:41:07.435430    7034 out.go:311] Setting ErrFile to fd 2...
	I0816 21:41:07.435434    7034 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:41:07.435532    7034 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	W0816 21:41:07.435637    7034 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/config/config.json: no such file or directory
	I0816 21:41:07.435740    7034 out.go:305] Setting JSON to true
	I0816 21:41:07.469433    7034 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":1429,"bootTime":1629148638,"procs":142,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 21:41:07.469538    7034 start.go:121] virtualization: kvm guest
	I0816 21:41:07.472125    7034 notify.go:169] Checking for updates...
	I0816 21:41:07.474284    7034 config.go:177] Loaded profile config "download-only-20210816214058-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.14.0
	W0816 21:41:07.474328    7034 start.go:659] api.Load failed for download-only-20210816214058-6986: filestore "download-only-20210816214058-6986": Docker machine "download-only-20210816214058-6986" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0816 21:41:07.474370    7034 driver.go:335] Setting default libvirt URI to qemu:///system
	W0816 21:41:07.474397    7034 start.go:659] api.Load failed for download-only-20210816214058-6986: filestore "download-only-20210816214058-6986": Docker machine "download-only-20210816214058-6986" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0816 21:41:07.502819    7034 start.go:278] selected driver: kvm2
	I0816 21:41:07.502838    7034 start.go:751] validating driver "kvm2" against &{Name:download-only-20210816214058-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.14.0 ClusterName:download-only-20210816214058-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 21:41:07.503609    7034 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 21:41:07.503757    7034 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 21:41:07.514026    7034 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0816 21:41:07.514760    7034 cni.go:93] Creating CNI manager for ""
	I0816 21:41:07.514777    7034 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 21:41:07.514784    7034 start_flags.go:277] config:
	{Name:download-only-20210816214058-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:download-only-20210816214058-6986 Namespace:
default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 21:41:07.514867    7034 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 21:41:07.516653    7034 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 21:41:07.580220    7034 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0816 21:41:07.580255    7034 cache.go:56] Caching tarball of preloaded images
	I0816 21:41:07.580422    7034 preload.go:131] Checking if preload exists for k8s version v1.21.3 and runtime containerd
	I0816 21:41:07.582478    7034 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 ...
	I0816 21:41:07.646370    7034 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:6ee74ddc722ac9485c71891d6e62193d -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4
	I0816 21:41:11.960039    7034 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 ...
	I0816 21:41:11.960132    7034 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.21.3-containerd-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210816214058-6986"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.21.3/LogsDuration (0.06s)

TestDownloadOnly/v1.22.0-rc.0/json-events (7.72s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210816214058-6986 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210816214058-6986 --force --alsologtostderr --kubernetes-version=v1.22.0-rc.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (7.717520483s)
--- PASS: TestDownloadOnly/v1.22.0-rc.0/json-events (7.72s)

TestDownloadOnly/v1.22.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.22.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.22.0-rc.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210816214058-6986
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210816214058-6986: exit status 85 (64.463922ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/08/16 21:41:14
	Running on machine: debian-jenkins-agent-3
	Binary: Built with gc go1.16.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0816 21:41:14.405671    7071 out.go:298] Setting OutFile to fd 1 ...
	I0816 21:41:14.405743    7071 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:41:14.405748    7071 out.go:311] Setting ErrFile to fd 2...
	I0816 21:41:14.405751    7071 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:41:14.405865    7071 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	W0816 21:41:14.405965    7071 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/config/config.json: no such file or directory
	I0816 21:41:14.406065    7071 out.go:305] Setting JSON to true
	I0816 21:41:14.440239    7071 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":1436,"bootTime":1629148638,"procs":142,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 21:41:14.440327    7071 start.go:121] virtualization: kvm guest
	I0816 21:41:14.443014    7071 notify.go:169] Checking for updates...
	I0816 21:41:14.445374    7071 config.go:177] Loaded profile config "download-only-20210816214058-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	W0816 21:41:14.445410    7071 start.go:659] api.Load failed for download-only-20210816214058-6986: filestore "download-only-20210816214058-6986": Docker machine "download-only-20210816214058-6986" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0816 21:41:14.445465    7071 driver.go:335] Setting default libvirt URI to qemu:///system
	W0816 21:41:14.445492    7071 start.go:659] api.Load failed for download-only-20210816214058-6986: filestore "download-only-20210816214058-6986": Docker machine "download-only-20210816214058-6986" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0816 21:41:14.473218    7071 start.go:278] selected driver: kvm2
	I0816 21:41:14.473234    7071 start.go:751] validating driver "kvm2" against &{Name:download-only-20210816214058-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.21.3 ClusterName:download-only-20210816214058-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 21:41:14.474012    7071 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 21:41:14.474140    7071 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0816 21:41:14.485117    7071 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.22.0
	I0816 21:41:14.485736    7071 cni.go:93] Creating CNI manager for ""
	I0816 21:41:14.485747    7071 cni.go:163] "kvm2" driver + containerd runtime found, recommending bridge
	I0816 21:41:14.485764    7071 start_flags.go:277] config:
	{Name:download-only-20210816214058-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-rc.0 ClusterName:download-only-20210816214058-6986 Names
pace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 21:41:14.485872    7071 iso.go:123] acquiring lock: {Name:mk4d96b7e9f76537548b4828641f235ae6b81a3f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0816 21:41:14.487909    7071 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 21:41:14.575695    7071 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0816 21:41:14.575738    7071 cache.go:56] Caching tarball of preloaded images
	I0816 21:41:14.575926    7071 preload.go:131] Checking if preload exists for k8s version v1.22.0-rc.0 and runtime containerd
	I0816 21:41:14.578187    7071 preload.go:237] getting checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 ...
	I0816 21:41:14.643755    7071 download.go:92] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:569167d620e883cc7aa194927ed83d26 -> /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4
	I0816 21:41:19.298737    7071 preload.go:247] saving checksum for preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 ...
	I0816 21:41:19.298832    7071 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-rc.0-containerd-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210816214058-6986"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.0-rc.0/LogsDuration (0.06s)

TestDownloadOnly/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.22s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20210816214058-6986
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.22s)

TestOffline (158.91s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20210816222224-6986 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20210816222224-6986 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (2m37.731076639s)
helpers_test.go:176: Cleaning up "offline-containerd-20210816222224-6986" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20210816222224-6986
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20210816222224-6986: (1.182596338s)
--- PASS: TestOffline (158.91s)

TestAddons/parallel/Registry (15.42s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:284: registry stabilized in 19.771021ms

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

                                                
                                                
=== CONT  TestAddons/parallel/Registry
helpers_test.go:343: "registry-68tdj" [060f16b9-c7f0-4c91-a03e-a87ac17a864a] Running

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:286: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.031214368s
addons_test.go:289: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:343: "registry-proxy-9ckln" [20ea28ea-212b-4afb-8c8b-9fb967deff45] Running
addons_test.go:289: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.015403771s
addons_test.go:294: (dbg) Run:  kubectl --context addons-20210816214122-6986 delete po -l run=registry-test --now
addons_test.go:299: (dbg) Run:  kubectl --context addons-20210816214122-6986 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:299: (dbg) Done: kubectl --context addons-20210816214122-6986 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.609160977s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210816214122-6986 ip
2021/08/16 21:44:46 [DEBUG] GET http://192.168.50.28:5000
addons_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210816214122-6986 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.42s)
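Note: the registry addon check above can be reproduced by hand. A minimal sketch, assuming the addons profile is still running (the pod name reg-check is hypothetical; both probes mirror commands from the log):

    # Probe the registry Service through cluster DNS, exactly as the test does
    kubectl --context addons-20210816214122-6986 run --rm reg-check --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # Then hit the registry-proxy port on the VM IP reported by 'minikube ip'
    curl -v "http://$(out/minikube-linux-amd64 -p addons-20210816214122-6986 ip):5000"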

TestAddons/parallel/Ingress (42.45s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "ingress-nginx" ...
helpers_test.go:343: "ingress-nginx-admission-create-bm4xz" [23faf5ec-0f1a-4fb7-af07-4712bd97d707] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 12.016695ms
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210816214122-6986 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:180: (dbg) Run:  kubectl --context addons-20210816214122-6986 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:343: "nginx" [0cdfc241-bec2-42f8-ac0a-660f424f5be0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:343: "nginx" [0cdfc241-bec2-42f8-ac0a-660f424f5be0] Running
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.017561747s
addons_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210816214122-6986 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210816214122-6986 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210816214122-6986 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:265: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210816214122-6986 addons disable ingress --alsologtostderr -v=1
addons_test.go:265: (dbg) Done: out/minikube-linux-amd64 -p addons-20210816214122-6986 addons disable ingress --alsologtostderr -v=1: (29.72848617s)
--- PASS: TestAddons/parallel/Ingress (42.45s)

TestAddons/parallel/MetricsServer (5.80s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:361: metrics-server stabilized in 23.409923ms
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:343: "metrics-server-77c99ccb96-whfxz" [e2cb2b8f-baf1-47e3-b572-e75939aad55e] Running
addons_test.go:363: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.02563552s
addons_test.go:369: (dbg) Run:  kubectl --context addons-20210816214122-6986 top pods -n kube-system
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210816214122-6986 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.80s)

TestAddons/parallel/HelmTiller (16.96s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:410: tiller-deploy stabilized in 2.110744ms
addons_test.go:412: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:343: "tiller-deploy-768d69497-kp82t" [1b7b94c3-2ae7-44c5-a0ff-3fa565343a62] Running
addons_test.go:412: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.030333896s
addons_test.go:427: (dbg) Run:  kubectl --context addons-20210816214122-6986 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
addons_test.go:427: (dbg) Done: kubectl --context addons-20210816214122-6986 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (8.102340453s)
addons_test.go:432: kubectl --context addons-20210816214122-6986 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:427: (dbg) Run:  kubectl --context addons-20210816214122-6986 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version
addons_test.go:427: (dbg) Done: kubectl --context addons-20210816214122-6986 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (2.402427373s)
addons_test.go:444: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210816214122-6986 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (16.96s)
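Note on the "Unable to use a TTY" stderr above: the test passes -it to kubectl run while its stdin is not a terminal, so kubectl prints the warning and falls back to logs. A minimal sketch of the variant that avoids the warning (dropping -t is the only change from the command in the log):

    # -i keeps stdin open; omitting -t avoids requesting a TTY the CI runner does not have
    kubectl --context addons-20210816214122-6986 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -i --namespace=kube-system --serviceaccount=tiller -- version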

TestAddons/parallel/Olm (66.61s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:463: catalog-operator stabilized in 24.224277ms
addons_test.go:467: olm-operator stabilized in 26.35718ms
addons_test.go:471: packageserver stabilized in 28.138086ms
addons_test.go:473: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=catalog-operator" in namespace "olm" ...
helpers_test.go:343: "catalog-operator-75d496484d-bb5hr" [bdc8a970-2418-449b-af5f-153559c5f025] Running
addons_test.go:473: (dbg) TestAddons/parallel/Olm: app=catalog-operator healthy within 5.025914868s
addons_test.go:476: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=olm-operator" in namespace "olm" ...
helpers_test.go:343: "olm-operator-859c88c96-r8t92" [beeab995-77c5-40d0-8792-7d40cbc2e0e9] Running
addons_test.go:476: (dbg) TestAddons/parallel/Olm: app=olm-operator healthy within 5.011177727s
addons_test.go:479: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=packageserver" in namespace "olm" ...
helpers_test.go:343: "packageserver-6b589cb55f-p7x9j" [f49d3924-12ad-4e13-98eb-3cf43d358df7] Running
helpers_test.go:343: "packageserver-6b589cb55f-w87wd" [b9ecdb58-020b-44bc-8bc9-48170bf647f8] Running
helpers_test.go:343: "packageserver-6b589cb55f-p7x9j" [f49d3924-12ad-4e13-98eb-3cf43d358df7] Running
helpers_test.go:343: "packageserver-6b589cb55f-w87wd" [b9ecdb58-020b-44bc-8bc9-48170bf647f8] Running
helpers_test.go:343: "packageserver-6b589cb55f-p7x9j" [f49d3924-12ad-4e13-98eb-3cf43d358df7] Running
helpers_test.go:343: "packageserver-6b589cb55f-w87wd" [b9ecdb58-020b-44bc-8bc9-48170bf647f8] Running
helpers_test.go:343: "packageserver-6b589cb55f-p7x9j" [f49d3924-12ad-4e13-98eb-3cf43d358df7] Running
helpers_test.go:343: "packageserver-6b589cb55f-w87wd" [b9ecdb58-020b-44bc-8bc9-48170bf647f8] Running
helpers_test.go:343: "packageserver-6b589cb55f-p7x9j" [f49d3924-12ad-4e13-98eb-3cf43d358df7] Running
helpers_test.go:343: "packageserver-6b589cb55f-w87wd" [b9ecdb58-020b-44bc-8bc9-48170bf647f8] Running
helpers_test.go:343: "packageserver-6b589cb55f-p7x9j" [f49d3924-12ad-4e13-98eb-3cf43d358df7] Running
addons_test.go:479: (dbg) TestAddons/parallel/Olm: app=packageserver healthy within 5.024222207s
addons_test.go:482: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "olm.catalogSource=operatorhubio-catalog" in namespace "olm" ...
helpers_test.go:343: "operatorhubio-catalog-grv29" [c6af74b7-7114-489e-bd71-eabe168978ba] Running
addons_test.go:482: (dbg) TestAddons/parallel/Olm: olm.catalogSource=operatorhubio-catalog healthy within 5.009456752s
addons_test.go:487: (dbg) Run:  kubectl --context addons-20210816214122-6986 create -f testdata/etcd.yaml
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210816214122-6986 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210816214122-6986 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210816214122-6986 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210816214122-6986 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210816214122-6986 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210816214122-6986 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210816214122-6986 get csv -n my-etcd
addons_test.go:499: kubectl --context addons-20210816214122-6986 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210816214122-6986 get csv -n my-etcd
addons_test.go:494: (dbg) Run:  kubectl --context addons-20210816214122-6986 get csv -n my-etcd
--- PASS: TestAddons/parallel/Olm (66.61s)
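Note: the repeated "get csv" calls above are a poll loop; a ClusterServiceVersion only appears in my-etcd once OLM has resolved the subscription created from testdata/etcd.yaml. A minimal sketch of the same wait as a shell loop (the loop and the etcd grep pattern are illustrative, not the test's own code):

    # Poll until the etcd operator's CSV shows up in the my-etcd namespace
    until kubectl --context addons-20210816214122-6986 get csv -n my-etcd 2>/dev/null | grep -q etcd; do
        sleep 5
    done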

TestAddons/parallel/CSI (89.20s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 15.921662ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-20210816214122-6986 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210816214122-6986 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210816214122-6986 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-20210816214122-6986 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:343: "task-pv-pod" [20cd5a35-8252-4edb-83a4-e80a56fb9f39] Pending
helpers_test.go:343: "task-pv-pod" [20cd5a35-8252-4edb-83a4-e80a56fb9f39] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:343: "task-pv-pod" [20cd5a35-8252-4edb-83a4-e80a56fb9f39] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 32.020694653s
addons_test.go:549: (dbg) Run:  kubectl --context addons-20210816214122-6986 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210816214122-6986 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:426: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:418: (dbg) Run:  kubectl --context addons-20210816214122-6986 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-20210816214122-6986 delete pod task-pv-pod
addons_test.go:559: (dbg) Done: kubectl --context addons-20210816214122-6986 delete pod task-pv-pod: (5.905988709s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-20210816214122-6986 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-20210816214122-6986 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210816214122-6986 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:393: (dbg) Run:  kubectl --context addons-20210816214122-6986 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-20210816214122-6986 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:343: "task-pv-pod-restore" [bfdd7b00-5510-43d0-851f-168cfbefc6b4] Pending
helpers_test.go:343: "task-pv-pod-restore" [bfdd7b00-5510-43d0-851f-168cfbefc6b4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:343: "task-pv-pod-restore" [bfdd7b00-5510-43d0-851f-168cfbefc6b4] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 25.023708417s
addons_test.go:591: (dbg) Run:  kubectl --context addons-20210816214122-6986 delete pod task-pv-pod-restore
addons_test.go:591: (dbg) Done: kubectl --context addons-20210816214122-6986 delete pod task-pv-pod-restore: (13.387211175s)
addons_test.go:595: (dbg) Run:  kubectl --context addons-20210816214122-6986 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-20210816214122-6986 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210816214122-6986 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-20210816214122-6986 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.059757248s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210816214122-6986 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (89.20s)
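Note: the sequence above is the standard CSI snapshot/restore round trip. A minimal sketch of the same flow with kubectl, assuming the csi-hostpath-driver and volumesnapshots addons are enabled and reusing the test's own manifests:

    # Create a PVC, mount it from a pod, then snapshot the volume
    kubectl create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml
    # Empty output here means the snapshot is not ready yet, as in the WARNING above
    kubectl get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
    # Restore the snapshot into a fresh PVC and mount it from a new pod
    kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml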

TestAddons/parallel/GCPAuth (73.35s)
=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:618: (dbg) Run:  kubectl --context addons-20210816214122-6986 create -f testdata/busybox.yaml
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [d613258a-512e-4283-9b0c-34d6815bda9b] Pending
helpers_test.go:343: "busybox" [d613258a-512e-4283-9b0c-34d6815bda9b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [d613258a-512e-4283-9b0c-34d6815bda9b] Running
addons_test.go:624: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 12.020352432s
addons_test.go:630: (dbg) Run:  kubectl --context addons-20210816214122-6986 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:667: (dbg) Run:  kubectl --context addons-20210816214122-6986 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:683: (dbg) Run:  kubectl --context addons-20210816214122-6986 apply -f testdata/private-image.yaml
addons_test.go:690: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:343: "private-image-7ff9c8c74f-sg47x" [3f28ac1c-f9ba-4ba5-933d-1061893ff81c] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])
helpers_test.go:343: "private-image-7ff9c8c74f-sg47x" [3f28ac1c-f9ba-4ba5-933d-1061893ff81c] Running
addons_test.go:690: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image healthy within 17.058561686s
addons_test.go:696: (dbg) Run:  kubectl --context addons-20210816214122-6986 apply -f testdata/private-image-eu.yaml
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image-eu" in namespace "default" ...
helpers_test.go:343: "private-image-eu-5956d58f9f-mlvvg" [de9e350b-2969-4515-8f9f-794929556ca8] Pending / Ready:ContainersNotReady (containers with unready status: [private-image-eu]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image-eu])
helpers_test.go:343: "private-image-eu-5956d58f9f-mlvvg" [de9e350b-2969-4515-8f9f-794929556ca8] Running
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image-eu healthy within 15.018834129s
addons_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210816214122-6986 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:709: (dbg) Done: out/minikube-linux-amd64 -p addons-20210816214122-6986 addons disable gcp-auth --alsologtostderr -v=1: (27.562826167s)
--- PASS: TestAddons/parallel/GCPAuth (73.35s)
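Note: the two printenv calls above are the core assertion: the gcp-auth webhook mutates new pods to mount credentials and set these variables. A minimal sketch of checking the injection by hand, assuming the busybox pod from the test is still running:

    # Both variables should be set inside any pod created after the addon was enabled
    kubectl --context addons-20210816214122-6986 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT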

TestCertOptions (98.93s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20210816222641-6986 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
E0816 22:27:19.465867    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
cert_options_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20210816222641-6986 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m37.354302914s)
cert_options_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20210816222641-6986 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:73: (dbg) Run:  kubectl --context cert-options-20210816222641-6986 config view
helpers_test.go:176: Cleaning up "cert-options-20210816222641-6986" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20210816222641-6986
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20210816222641-6986: (1.308168885s)
--- PASS: TestCertOptions (98.93s)
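Note: the openssl command above is how the extra SANs and port can be verified by hand. A minimal sketch (the grep pattern is an assumption about openssl's text output layout):

    # Print the apiserver certificate and look for the requested names and IPs
    out/minikube-linux-amd64 -p cert-options-20210816222641-6986 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
    # Expect localhost, www.google.com, 127.0.0.1 and 192.168.15.15 among the SANs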

TestForceSystemdFlag (75.23s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20210816222641-6986 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20210816222641-6986 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m13.794707999s)
docker_test.go:113: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20210816222641-6986 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-20210816222641-6986" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20210816222641-6986
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20210816222641-6986: (1.23231171s)
--- PASS: TestForceSystemdFlag (75.23s)
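Note: the config.toml read above is the assertion behind --force-systemd. A minimal sketch of the same check (SystemdCgroup is containerd's runc-shim cgroup-driver setting; treating it as the pass condition is an assumption about what the test looks for):

    # With --force-systemd, containerd's runc runtime should use the systemd cgroup driver
    out/minikube-linux-amd64 -p force-systemd-flag-20210816222641-6986 ssh "cat /etc/containerd/config.toml" | grep -i 'SystemdCgroup'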

TestForceSystemdEnv (101.09s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20210816222224-6986 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20210816222224-6986 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m39.792526719s)
docker_test.go:113: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20210816222224-6986 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-20210816222224-6986" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20210816222224-6986
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20210816222224-6986: (1.049664377s)
--- PASS: TestForceSystemdEnv (101.09s)

TestKVMDriverInstallOrUpdate (3.68s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.68s)

TestErrorSpam/setup (58.87s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:78: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210816214758-6986 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210816214758-6986 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:78: (dbg) Done: out/minikube-linux-amd64 start -p nospam-20210816214758-6986 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-20210816214758-6986 --driver=kvm2  --container-runtime=containerd: (58.870413514s)
--- PASS: TestErrorSpam/setup (58.87s)

TestErrorSpam/start (0.41s)
=== RUN   TestErrorSpam/start
error_spam_test.go:213: Cleaning up 1 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214758-6986 --log_dir /tmp/nospam-20210816214758-6986 start --dry-run
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214758-6986 --log_dir /tmp/nospam-20210816214758-6986 start --dry-run
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214758-6986 --log_dir /tmp/nospam-20210816214758-6986 start --dry-run
--- PASS: TestErrorSpam/start (0.41s)

TestErrorSpam/status (0.73s)
=== RUN   TestErrorSpam/status
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214758-6986 --log_dir /tmp/nospam-20210816214758-6986 status
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214758-6986 --log_dir /tmp/nospam-20210816214758-6986 status
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214758-6986 --log_dir /tmp/nospam-20210816214758-6986 status
--- PASS: TestErrorSpam/status (0.73s)

TestErrorSpam/pause (5.03s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214758-6986 --log_dir /tmp/nospam-20210816214758-6986 pause
error_spam_test.go:156: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-20210816214758-6986 --log_dir /tmp/nospam-20210816214758-6986 pause: exit status 80 (2.521721686s)
-- stdout --
	* Pausing node nospam-20210816214758-6986 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: runc: sudo runc --root /run/containerd/runc/k8s.io pause 0af2a85a115a5b7bc9c74f13d60eb8780a5987888389b0248d7b46e1405910bf 443a2ec8ae41cda0ae893b497794b56adf2f83296c675eee0148db0888dd1efb: Process exited with status 1
	stdout:
	Incorrect Usage.
	
	NAME:
	   runc pause - pause suspends all processes inside the container
	
	USAGE:
	   runc pause <container-id>
	
	Where "<container-id>" is the name for the instance of the container to be
	paused. 
	
	DESCRIPTION:
	   The pause command suspends all processes in the instance of the container.
	
	Use runc list to identify instances of containers and their current status.
	
	stderr:
	time="2021-08-16T21:49:01Z" level=error msg="runc: \"pause\" requires exactly 1 argument(s)"
	
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭───────────────────────────────────────────────────────────────────────────────╮
	│                                                                               │
	│    * If the above advice does not help, please let us know:                   │
	│      https://github.com/kubernetes/minikube/issues/new/choose                 │
	│                                                                               │
	│    * Please attach the following file to the GitHub issue:                    │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log    │
	│                                                                               │
	╰───────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:158: "out/minikube-linux-amd64 -p nospam-20210816214758-6986 --log_dir /tmp/nospam-20210816214758-6986 pause" failed: exit status 80
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214758-6986 --log_dir /tmp/nospam-20210816214758-6986 pause
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210816214758-6986 --log_dir /tmp/nospam-20210816214758-6986 pause: (2.000039806s)
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214758-6986 --log_dir /tmp/nospam-20210816214758-6986 pause
--- PASS: TestErrorSpam/pause (5.03s)
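Note on the exit-status-80 run above: minikube handed two container IDs to a single runc pause invocation, but runc's own usage text says pause "requires exactly 1 argument(s)". A minimal sketch of the per-container form runc expects, using the IDs from the failing command:

    # Pause each container in its own invocation; runc accepts exactly one ID per call
    sudo runc --root /run/containerd/runc/k8s.io pause 0af2a85a115a5b7bc9c74f13d60eb8780a5987888389b0248d7b46e1405910bf
    sudo runc --root /run/containerd/runc/k8s.io pause 443a2ec8ae41cda0ae893b497794b56adf2f83296c675eee0148db0888dd1efb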

TestErrorSpam/unpause (1.65s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214758-6986 --log_dir /tmp/nospam-20210816214758-6986 unpause
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214758-6986 --log_dir /tmp/nospam-20210816214758-6986 unpause
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214758-6986 --log_dir /tmp/nospam-20210816214758-6986 unpause
--- PASS: TestErrorSpam/unpause (1.65s)

TestErrorSpam/stop (5.40s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:213: Cleaning up 0 logfile(s) ...
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214758-6986 --log_dir /tmp/nospam-20210816214758-6986 stop
error_spam_test.go:156: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210816214758-6986 --log_dir /tmp/nospam-20210816214758-6986 stop: (5.260809224s)
error_spam_test.go:156: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214758-6986 --log_dir /tmp/nospam-20210816214758-6986 stop
error_spam_test.go:179: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210816214758-6986 --log_dir /tmp/nospam-20210816214758-6986 stop
--- PASS: TestErrorSpam/stop (5.40s)

TestFunctional/serial/CopySyncFile (0.00s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1606: local sync path: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/files/etc/test/nested/copy/6986/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (109.88s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:1982: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210816214911-6986 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0816 21:49:31.882363    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
E0816 21:49:31.888229    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
E0816 21:49:31.898489    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
E0816 21:49:31.918761    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
E0816 21:49:31.959048    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
E0816 21:49:32.039303    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
E0816 21:49:32.199714    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
E0816 21:49:32.520307    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
E0816 21:49:33.161032    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
E0816 21:49:34.441498    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
E0816 21:49:37.003246    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
E0816 21:49:42.123772    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
E0816 21:49:52.364810    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
E0816 21:50:12.845846    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
E0816 21:50:53.806661    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
functional_test.go:1982: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210816214911-6986 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m49.882123001s)
--- PASS: TestFunctional/serial/StartWithProxy (109.88s)

TestFunctional/serial/AuditLog (0.00s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (25.45s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:627: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210816214911-6986 --alsologtostderr -v=8
functional_test.go:627: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210816214911-6986 --alsologtostderr -v=8: (25.449752762s)
functional_test.go:631: soft start took 25.450353148s for "functional-20210816214911-6986" cluster.
--- PASS: TestFunctional/serial/SoftStart (25.45s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:647: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.24s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:660: (dbg) Run:  kubectl --context functional-20210816214911-6986 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.24s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.41s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 cache add k8s.gcr.io/pause:3.1
functional_test.go:982: (dbg) Done: out/minikube-linux-amd64 -p functional-20210816214911-6986 cache add k8s.gcr.io/pause:3.1: (1.23119316s)
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 cache add k8s.gcr.io/pause:3.3
functional_test.go:982: (dbg) Done: out/minikube-linux-amd64 -p functional-20210816214911-6986 cache add k8s.gcr.io/pause:3.3: (1.711315283s)
functional_test.go:982: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 cache add k8s.gcr.io/pause:latest
functional_test.go:982: (dbg) Done: out/minikube-linux-amd64 -p functional-20210816214911-6986 cache add k8s.gcr.io/pause:latest: (1.471000507s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.41s)

TestFunctional/serial/CacheCmd/cache/add_local (2.40s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1012: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210816214911-6986 /tmp/functional-20210816214911-6986665869808
functional_test.go:1024: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 cache add minikube-local-cache-test:functional-20210816214911-6986
functional_test.go:1024: (dbg) Done: out/minikube-linux-amd64 -p functional-20210816214911-6986 cache add minikube-local-cache-test:functional-20210816214911-6986: (2.127570794s)
functional_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 cache delete minikube-local-cache-test:functional-20210816214911-6986
functional_test.go:1018: (dbg) Run:  docker rmi minikube-local-cache-test:functional-20210816214911-6986
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.40s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1036: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.43s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1078: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1084: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (226.616526ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 cache reload
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-20210816214911-6986 cache reload: (1.714427095s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.43s)
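Note: the sequence above exercises the cache round trip end to end. A minimal sketch of the same flow by hand, assuming an image previously added with "cache add":

    # Remove the cached image from the node's containerd store
    out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh sudo crictl rmi k8s.gcr.io/pause:latest
    # Push everything in minikube's local cache back onto the node
    out/minikube-linux-amd64 -p functional-20210816214911-6986 cache reload
    # inspecti should now succeed again
    out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh sudo crictl inspecti k8s.gcr.io/pause:latest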

TestFunctional/serial/CacheCmd/cache/delete (0.10s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1103: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:678: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 kubectl -- --context functional-20210816214911-6986 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:701: (dbg) Run:  out/kubectl --context functional-20210816214911-6986 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (39.27s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:715: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210816214911-6986 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0816 21:52:15.727816    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
functional_test.go:715: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210816214911-6986 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.270742289s)
functional_test.go:719: restart took 39.270837252s for "functional-20210816214911-6986" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.27s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:766: (dbg) Run:  kubectl --context functional-20210816214911-6986 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:780: etcd phase: Running
functional_test.go:790: etcd status: Ready
functional_test.go:780: kube-apiserver phase: Running
functional_test.go:790: kube-apiserver status: Ready
functional_test.go:780: kube-controller-manager phase: Running
functional_test.go:790: kube-controller-manager status: Ready
functional_test.go:780: kube-scheduler phase: Running
functional_test.go:790: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.42s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1165: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 logs
functional_test.go:1165: (dbg) Done: out/minikube-linux-amd64 -p functional-20210816214911-6986 logs: (1.420203632s)
--- PASS: TestFunctional/serial/LogsCmd (1.42s)

TestFunctional/serial/LogsFileCmd (1.40s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1181: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 logs --file /tmp/functional-20210816214911-6986199161487/logs.txt
functional_test.go:1181: (dbg) Done: out/minikube-linux-amd64 -p functional-20210816214911-6986 logs --file /tmp/functional-20210816214911-6986199161487/logs.txt: (1.398156926s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.40s)
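
Note: logs prints to stdout by default; --file writes the same output to a host path instead, which is the only difference between the two log tests above. Sketch (output path illustrative):

	out/minikube-linux-amd64 -p functional-20210816214911-6986 logs
	out/minikube-linux-amd64 -p functional-20210816214911-6986 logs --file /tmp/logs.txt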

TestFunctional/parallel/ConfigCmd (0.40s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210816214911-6986 config get cpus: exit status 14 (67.747418ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 config set cpus 2
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 config get cpus
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1129: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210816214911-6986 config get cpus: exit status 14 (62.122992ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)
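
Note: config is a simple key/value store, and per the runs above, get on an unset key fails with exit status 14 rather than 0. A sketch of the cycle the test walks:

	out/minikube-linux-amd64 -p functional-20210816214911-6986 config set cpus 2
	out/minikube-linux-amd64 -p functional-20210816214911-6986 config get cpus    # prints 2
	out/minikube-linux-amd64 -p functional-20210816214911-6986 config unset cpus
	out/minikube-linux-amd64 -p functional-20210816214911-6986 config get cpus    # exit 14: key not found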

TestFunctional/parallel/DashboardCmd (5.81s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:857: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210816214911-6986 --alsologtostderr -v=1]

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:862: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210816214911-6986 --alsologtostderr -v=1] ...

=== CONT  TestFunctional/parallel/DashboardCmd
helpers_test.go:507: unable to kill pid 11624: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (5.81s)
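
Note: dashboard --url proxies the dashboard and prints the URL instead of opening a browser; the "unable to kill pid" message above appears benign, since the daemon had already exited by the time the test cleaned up. Sketch:

	out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-20210816214911-6986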

TestFunctional/parallel/DryRun (0.31s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210816214911-6986 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:919: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210816214911-6986 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (154.913702ms)

-- stdout --
	* [functional-20210816214911-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0816 21:52:45.499220   11182 out.go:298] Setting OutFile to fd 1 ...
	I0816 21:52:45.499377   11182 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:52:45.499385   11182 out.go:311] Setting ErrFile to fd 2...
	I0816 21:52:45.499388   11182 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:52:45.499471   11182 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 21:52:45.499707   11182 out.go:305] Setting JSON to false
	I0816 21:52:45.533987   11182 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":2127,"bootTime":1629148638,"procs":185,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 21:52:45.534098   11182 start.go:121] virtualization: kvm guest
	I0816 21:52:45.536124   11182 out.go:177] * [functional-20210816214911-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 21:52:45.537881   11182 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 21:52:45.539307   11182 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 21:52:45.540778   11182 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 21:52:45.542086   11182 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 21:52:45.542531   11182 config.go:177] Loaded profile config "functional-20210816214911-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 21:52:45.542913   11182 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 21:52:45.542984   11182 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 21:52:45.553654   11182 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:38543
	I0816 21:52:45.554066   11182 main.go:130] libmachine: () Calling .GetVersion
	I0816 21:52:45.554617   11182 main.go:130] libmachine: Using API Version  1
	I0816 21:52:45.554638   11182 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 21:52:45.555028   11182 main.go:130] libmachine: () Calling .GetMachineName
	I0816 21:52:45.555223   11182 main.go:130] libmachine: (functional-20210816214911-6986) Calling .DriverName
	I0816 21:52:45.555433   11182 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 21:52:45.555880   11182 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 21:52:45.555920   11182 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 21:52:45.566749   11182 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34865
	I0816 21:52:45.567151   11182 main.go:130] libmachine: () Calling .GetVersion
	I0816 21:52:45.567630   11182 main.go:130] libmachine: Using API Version  1
	I0816 21:52:45.567655   11182 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 21:52:45.568040   11182 main.go:130] libmachine: () Calling .GetMachineName
	I0816 21:52:45.568194   11182 main.go:130] libmachine: (functional-20210816214911-6986) Calling .DriverName
	I0816 21:52:45.600600   11182 out.go:177] * Using the kvm2 driver based on existing profile
	I0816 21:52:45.600623   11182 start.go:278] selected driver: kvm2
	I0816 21:52:45.600628   11182 start.go:751] validating driver "kvm2" against &{Name:functional-20210816214911-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210816214911-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.73 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 21:52:45.600780   11182 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 21:52:45.603021   11182 out.go:177] 
	W0816 21:52:45.603118   11182 out.go:242] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0816 21:52:45.604402   11182 out.go:177] 

** /stderr **
functional_test.go:934: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210816214911-6986 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.31s)
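
Note: --dry-run runs the full validation path without creating or mutating the VM; here the 250MB request trips the 1800MB usable minimum, so start exits with code 23 (RSRC_INSUFFICIENT_REQ_MEMORY), while the second, flag-free dry run validates cleanly. Sketch of both invocations:

	out/minikube-linux-amd64 start -p functional-20210816214911-6986 --dry-run --memory 250MB   # exit 23
	out/minikube-linux-amd64 start -p functional-20210816214911-6986 --dry-run                  # validates OK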

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210816214911-6986 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:956: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210816214911-6986 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (164.487344ms)

-- stdout --
	* [functional-20210816214911-6986] minikube v1.22.0 sur Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_LOCATION=12230
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0816 21:52:45.816746   11271 out.go:298] Setting OutFile to fd 1 ...
	I0816 21:52:45.816816   11271 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:52:45.816819   11271 out.go:311] Setting ErrFile to fd 2...
	I0816 21:52:45.816822   11271 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:52:45.816968   11271 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 21:52:45.817177   11271 out.go:305] Setting JSON to false
	I0816 21:52:45.857807   11271 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":2128,"bootTime":1629148638,"procs":194,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 21:52:45.857898   11271 start.go:121] virtualization: kvm guest
	I0816 21:52:45.860239   11271 out.go:177] * [functional-20210816214911-6986] minikube v1.22.0 sur Debian 9.13 (kvm/amd64)
	I0816 21:52:45.861703   11271 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 21:52:45.863234   11271 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 21:52:45.864686   11271 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 21:52:45.866129   11271 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 21:52:45.866562   11271 config.go:177] Loaded profile config "functional-20210816214911-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 21:52:45.866981   11271 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 21:52:45.867039   11271 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 21:52:45.877057   11271 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:34675
	I0816 21:52:45.877485   11271 main.go:130] libmachine: () Calling .GetVersion
	I0816 21:52:45.878053   11271 main.go:130] libmachine: Using API Version  1
	I0816 21:52:45.878074   11271 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 21:52:45.878390   11271 main.go:130] libmachine: () Calling .GetMachineName
	I0816 21:52:45.878582   11271 main.go:130] libmachine: (functional-20210816214911-6986) Calling .DriverName
	I0816 21:52:45.878758   11271 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 21:52:45.879069   11271 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 21:52:45.879099   11271 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 21:52:45.889115   11271 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37943
	I0816 21:52:45.889449   11271 main.go:130] libmachine: () Calling .GetVersion
	I0816 21:52:45.889914   11271 main.go:130] libmachine: Using API Version  1
	I0816 21:52:45.889939   11271 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 21:52:45.890265   11271 main.go:130] libmachine: () Calling .GetMachineName
	I0816 21:52:45.890405   11271 main.go:130] libmachine: (functional-20210816214911-6986) Calling .DriverName
	I0816 21:52:45.920456   11271 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0816 21:52:45.920480   11271 start.go:278] selected driver: kvm2
	I0816 21:52:45.920486   11271 start.go:751] validating driver "kvm2" against &{Name:functional-20210816214911-6986 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/12032/minikube-v1.22.0-1628622362-12032.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.25-1628619379-12032@sha256:937faef407987cbd8b3cb0a90c6c5dfd664817d5377be0b77a4ecbf0f9f9c1b6 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.21.3 ClusterName:functional-20210816214911-6986 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.73 Port:8441 KubernetesVersion:v1.21.3 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
	I0816 21:52:45.920622   11271 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 21:52:45.922912   11271 out.go:177] 
	W0816 21:52:45.923048   11271 out.go:242] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0816 21:52:45.924424   11271 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
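
Note: same dry-run failure as above, but with localized (French) output left verbatim. Minikube selects translations from the process locale, so this can presumably be reproduced by setting the locale environment; the exact variable the harness sets is an assumption here:

	LC_ALL=fr out/minikube-linux-amd64 start -p functional-20210816214911-6986 --dry-run --memory 250MB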

TestFunctional/parallel/StatusCmd (0.79s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:809: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 status
functional_test.go:815: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:826: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.79s)
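
Note: status -f takes a Go template over the status struct, and -o json emits the same data machine-readably. (The test's format string spells "kublet"; that is only a label in the output, the field itself is .Kubelet.) Sketch:

	out/minikube-linux-amd64 -p functional-20210816214911-6986 status -f 'host:{{.Host}},kubelet:{{.Kubelet}}'
	out/minikube-linux-amd64 -p functional-20210816214911-6986 status -o json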

TestFunctional/parallel/ServiceCmd (24.61s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1357: (dbg) Run:  kubectl --context functional-20210816214911-6986 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1363: (dbg) Run:  kubectl --context functional-20210816214911-6986 expose deployment hello-node --type=NodePort --port=8080

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:343: "hello-node-6cbfcd7cbc-9tqtt" [24a25062-1701-4f3a-9d28-96738b77630c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:343: "hello-node-6cbfcd7cbc-9tqtt" [24a25062-1701-4f3a-9d28-96738b77630c] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1368: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 23.02714059s
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 service list
functional_test.go:1385: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 service --namespace=default --https --url hello-node
functional_test.go:1394: found endpoint: https://192.168.50.73:31203
functional_test.go:1405: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 service hello-node --url --format={{.IP}}
functional_test.go:1414: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 service hello-node --url
functional_test.go:1420: found endpoint for hello-node: http://192.168.50.73:31203
functional_test.go:1431: Attempting to fetch http://192.168.50.73:31203 ...
functional_test.go:1450: http://192.168.50.73:31203: success! body:

Hostname: hello-node-6cbfcd7cbc-9tqtt

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.73:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.73:31203
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmd (24.61s)
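
Note: the flow above is the standard NodePort round trip: create a deployment, expose it, then let minikube resolve the node IP and port. Sketch of the same steps:

	kubectl --context functional-20210816214911-6986 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
	kubectl --context functional-20210816214911-6986 expose deployment hello-node --type=NodePort --port=8080
	out/minikube-linux-amd64 -p functional-20210816214911-6986 service hello-node --url
	curl "$(out/minikube-linux-amd64 -p functional-20210816214911-6986 service hello-node --url)"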

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1465: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 addons list
functional_test.go:1476: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (50.07s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:343: "storage-provisioner" [713ac73a-19a4-4c1e-a64b-48ac1d785525] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.010080692s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20210816214911-6986 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20210816214911-6986 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210816214911-6986 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210816214911-6986 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210816214911-6986 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [3d779848-f7e7-45ad-a2c9-800fb5654387] Pending
helpers_test.go:343: "sp-pod" [3d779848-f7e7-45ad-a2c9-800fb5654387] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:343: "sp-pod" [3d779848-f7e7-45ad-a2c9-800fb5654387] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.017187988s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20210816214911-6986 exec sp-pod -- touch /tmp/mount/foo

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20210816214911-6986 delete -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20210816214911-6986 delete -f testdata/storage-provisioner/pod.yaml: (10.781344292s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210816214911-6986 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:343: "sp-pod" [e613ae15-32c1-40e9-9905-c6bf99a323ac] Pending
helpers_test.go:343: "sp-pod" [e613ae15-32c1-40e9-9905-c6bf99a323ac] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:343: "sp-pod" [e613ae15-32c1-40e9-9905-c6bf99a323ac] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.009980843s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20210816214911-6986 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (50.07s)
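
Note: the claim test verifies that data written through a PVC survives pod deletion: the second sp-pod sees the /tmp/mount/foo created by the first. A sketch with the repo's testdata manifests:

	kubectl apply -f testdata/storage-provisioner/pvc.yaml
	kubectl apply -f testdata/storage-provisioner/pod.yaml
	kubectl exec sp-pod -- touch /tmp/mount/foo
	kubectl delete -f testdata/storage-provisioner/pod.yaml
	kubectl apply -f testdata/storage-provisioner/pod.yaml
	kubectl exec sp-pod -- ls /tmp/mount    # foo persists via the claim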

TestFunctional/parallel/SSHCmd (0.46s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1498: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1515: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

TestFunctional/parallel/CpCmd (0.49s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.49s)
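
Note: cp copies a file from the host into the VM, and the follow-up ssh cat is how the test verifies the content arrived. Sketch:

	out/minikube-linux-amd64 -p functional-20210816214911-6986 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "sudo cat /home/docker/cp-test.txt"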

TestFunctional/parallel/MySQL (28.37s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1546: (dbg) Run:  kubectl --context functional-20210816214911-6986 replace --force -f testdata/mysql.yaml

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-4zj8r" [8a1f4fec-bb4b-44de-b079-3ca30cc94bc7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:343: "mysql-9bbbc5bbb-4zj8r" [8a1f4fec-bb4b-44de-b079-3ca30cc94bc7] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1551: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.011018416s
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210816214911-6986 exec mysql-9bbbc5bbb-4zj8r -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210816214911-6986 exec mysql-9bbbc5bbb-4zj8r -- mysql -ppassword -e "show databases;": exit status 1 (217.133972ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210816214911-6986 exec mysql-9bbbc5bbb-4zj8r -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210816214911-6986 exec mysql-9bbbc5bbb-4zj8r -- mysql -ppassword -e "show databases;": exit status 1 (586.015665ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210816214911-6986 exec mysql-9bbbc5bbb-4zj8r -- mysql -ppassword -e "show databases;"
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210816214911-6986 exec mysql-9bbbc5bbb-4zj8r -- mysql -ppassword -e "show databases;": exit status 1 (435.039664ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210816214911-6986 exec mysql-9bbbc5bbb-4zj8r -- mysql -ppassword -e "show databases;"

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Non-zero exit: kubectl --context functional-20210816214911-6986 exec mysql-9bbbc5bbb-4zj8r -- mysql -ppassword -e "show databases;": exit status 1 (270.958619ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1558: (dbg) Run:  kubectl --context functional-20210816214911-6986 exec mysql-9bbbc5bbb-4zj8r -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.37s)
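
Note: the access-denied and socket errors above are expected while mysqld initializes inside the pod; the test simply retries until the query succeeds. A hedged equivalent retry loop (deploy/mysql targets the deployment's pod):

	until kubectl --context functional-20210816214911-6986 exec deploy/mysql -- \
	      mysql -ppassword -e "show databases;"; do sleep 2; done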

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1678: Checking for existence of /etc/test/nested/copy/6986/hosts within VM
functional_test.go:1679: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "sudo cat /etc/test/nested/copy/6986/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1684: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)
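
Note: FileSync checks that files staged under the test's MINIKUBE_HOME are copied into the VM at the matching path on start; /etc/test/nested/copy/6986/hosts corresponds to a file the harness placed under .minikube/files. A sketch of the mechanism (staging path is the documented sync root; file names illustrative):

	mkdir -p $MINIKUBE_HOME/.minikube/files/etc/test
	echo hello > $MINIKUBE_HOME/.minikube/files/etc/test/hello
	out/minikube-linux-amd64 start -p <profile>    # /etc/test/hello appears inside the VM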

TestFunctional/parallel/CertSync (1.67s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /etc/ssl/certs/6986.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "sudo cat /etc/ssl/certs/6986.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1719: Checking for existence of /usr/share/ca-certificates/6986.pem within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "sudo cat /usr/share/ca-certificates/6986.pem"
functional_test.go:1719: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1720: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "sudo cat /etc/ssl/certs/51391683.0"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1746: Checking for existence of /etc/ssl/certs/69862.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "sudo cat /etc/ssl/certs/69862.pem"
functional_test.go:1746: Checking for existence of /usr/share/ca-certificates/69862.pem within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "sudo cat /usr/share/ca-certificates/69862.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1746: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.67s)
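
Note: CertSync covers the same sync mechanism for certificates: the staged .pem ends up both under its literal path and as a hash-named entry in /etc/ssl/certs. Verification is just ssh plus cat, as above:

	out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "sudo cat /etc/ssl/certs/6986.pem"
	out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "sudo cat /etc/ssl/certs/51391683.0"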

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-20210816214911-6986 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/LoadImage (3.34s)

=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:239: (dbg) Run:  docker pull busybox:1.33

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:239: (dbg) Done: docker pull busybox:1.33: (1.238382388s)
functional_test.go:246: (dbg) Run:  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210816214911-6986

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 image load docker.io/library/busybox:load-functional-20210816214911-6986

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p functional-20210816214911-6986 image load docker.io/library/busybox:load-functional-20210816214911-6986: (1.777398595s)
functional_test.go:373: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210816214911-6986 -- sudo crictl inspecti docker.io/library/busybox:load-functional-20210816214911-6986
--- PASS: TestFunctional/parallel/LoadImage (3.34s)
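
Note: image load pushes a host-side image into the cluster's containerd store, and crictl inspecti is the in-VM verification. Sketch (the load-test tag is illustrative):

	docker pull busybox:1.33
	docker tag busybox:1.33 docker.io/library/busybox:load-test
	out/minikube-linux-amd64 -p functional-20210816214911-6986 image load docker.io/library/busybox:load-test
	out/minikube-linux-amd64 ssh -p functional-20210816214911-6986 -- sudo crictl inspecti docker.io/library/busybox:load-test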

TestFunctional/parallel/RemoveImage (3.65s)

=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage

=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:331: (dbg) Run:  docker pull busybox:1.32

=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:331: (dbg) Done: docker pull busybox:1.32: (1.294192681s)
functional_test.go:338: (dbg) Run:  docker tag busybox:1.32 docker.io/library/busybox:remove-functional-20210816214911-6986
functional_test.go:344: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 image load docker.io/library/busybox:remove-functional-20210816214911-6986

=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:344: (dbg) Done: out/minikube-linux-amd64 -p functional-20210816214911-6986 image load docker.io/library/busybox:remove-functional-20210816214911-6986: (1.529442412s)
functional_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 image rm docker.io/library/busybox:remove-functional-20210816214911-6986
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210816214911-6986 -- sudo crictl images
--- PASS: TestFunctional/parallel/RemoveImage (3.65s)
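
Note: image rm is the inverse of image load, removing the image from the cluster runtime; listing crictl images afterwards is the negative check. Sketch (tag illustrative):

	out/minikube-linux-amd64 -p functional-20210816214911-6986 image rm docker.io/library/busybox:remove-test
	out/minikube-linux-amd64 ssh -p functional-20210816214911-6986 -- sudo crictl images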

TestFunctional/parallel/LoadImageFromFile (2.45s)

=== RUN   TestFunctional/parallel/LoadImageFromFile
=== PAUSE TestFunctional/parallel/LoadImageFromFile

=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:279: (dbg) Run:  docker pull busybox:1.31

=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:279: (dbg) Done: docker pull busybox:1.31: (1.223620818s)
functional_test.go:286: (dbg) Run:  docker tag busybox:1.31 docker.io/library/busybox:load-from-file-functional-20210816214911-6986

=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:293: (dbg) Run:  docker save -o busybox.tar docker.io/library/busybox:load-from-file-functional-20210816214911-6986

=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/busybox.tar

=== CONT  TestFunctional/parallel/LoadImageFromFile
functional_test.go:387: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210816214911-6986 -- sudo crictl images
--- PASS: TestFunctional/parallel/LoadImageFromFile (2.45s)
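
Note: image load also accepts a tarball produced by docker save, which is the path exercised here. Sketch:

	docker save -o busybox.tar busybox:1.31
	out/minikube-linux-amd64 -p functional-20210816214911-6986 image load ./busybox.tar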

TestFunctional/parallel/BuildImage (5.87s)

=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage

=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 image build -t localhost/my-image:functional-20210816214911-6986 testdata/build

=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-20210816214911-6986 image build -t localhost/my-image:functional-20210816214911-6986 testdata/build: (5.627640767s)
functional_test.go:415: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20210816214911-6986 image build -t localhost/my-image:functional-20210816214911-6986 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile:
#1 transferring dockerfile: 77B done
#1 DONE 0.2s

#2 [internal] load .dockerignore
#2 transferring context:
#2 transferring context: 2B done
#2 DONE 0.1s

#3 [internal] load metadata for docker.io/library/busybox:latest
#3 DONE 1.4s

#6 [internal] load build context
#6 transferring context: 62B done
#6 DONE 0.1s

#4 [1/3] FROM docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60
#4 resolve docker.io/library/busybox@sha256:0f354ec1728d9ff32edcd7d1b8bbdfc798277ad36120dc3dc683be44524c8b60 0.1s done
#4 sha256:b71f96345d44b237decc0c2d6c2f9ad0d17fde83dad7579608f1f0764d9686f2 630.78kB / 766.61kB 0.2s
#4 extracting sha256:b71f96345d44b237decc0c2d6c2f9ad0d17fde83dad7579608f1f0764d9686f2 0.1s done
#4 DONE 0.3s

#5 [2/3] RUN true
#5 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.3s done
#8 exporting manifest sha256:8bebaf8a5191c660a813bdc627594f080e0414cfeb75b7ca5ec5af670d7de13a
#8 exporting manifest sha256:8bebaf8a5191c660a813bdc627594f080e0414cfeb75b7ca5ec5af670d7de13a 0.0s done
#8 exporting config sha256:0e431ff4da64dfd67384fb6a9e42162fc221280c62a2c253ea3b4e51fcec098a 0.0s done
#8 naming to localhost/my-image:functional-20210816214911-6986 done
#8 DONE 0.3s
functional_test.go:373: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210816214911-6986 -- sudo crictl inspecti localhost/my-image:functional-20210816214911-6986
--- PASS: TestFunctional/parallel/BuildImage (5.87s)
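
Note: image build runs the build inside the cluster (BuildKit against containerd, per the #N stage output above) and tags the result directly in the runtime, so no separate load step is needed. Sketch (tag illustrative):

	out/minikube-linux-amd64 -p functional-20210816214911-6986 image build -t localhost/my-image:test testdata/build
	out/minikube-linux-amd64 ssh -p functional-20210816214911-6986 -- sudo crictl inspecti localhost/my-image:test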

TestFunctional/parallel/ListImages (0.31s)

=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages

=== CONT  TestFunctional/parallel/ListImages
functional_test.go:441: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 image ls
functional_test.go:446: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210816214911-6986 image ls:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns/coredns:v1.8.0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-20210816214911-6986
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
docker.io/kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestFunctional/parallel/ListImages (0.31s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1774: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "sudo systemctl is-active docker"
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "sudo systemctl is-active docker": exit status 1 (283.589242ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:1774: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "sudo systemctl is-active crio"
functional_test.go:1774: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "sudo systemctl is-active crio": exit status 1 (232.45599ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
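
The two non-zero exits above are the expected outcome, not failures: systemd's is-active exits with status 3 for an inactive unit, which minikube ssh surfaces as exit status 1, so on a containerd cluster the docker and crio probes must fail. A sketch of all three probes with expected results noted:

    out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "sudo systemctl is-active containerd"  # "active", exit 0
    out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "sudo systemctl is-active docker"      # "inactive", non-zero exit
    out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "sudo systemctl is-active crio"        # "inactive", non-zero exit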

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:126: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20210816214911-6986 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:164: (dbg) Run:  kubectl --context functional-20210816214911-6986 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:229: tunnel at http://10.110.74.134 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:364: (dbg) stopping [out/minikube-linux-amd64 -p functional-20210816214911-6986 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
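
The four tunnel subtests walk one lifecycle: start the tunnel daemon, wait for the LoadBalancer service to be assigned an ingress IP, hit that IP directly, then tear the tunnel down. Roughly, by hand (nginx-svc and 10.110.74.134 are from this run; the IP differs per cluster, and `minikube tunnel` typically needs privileges to edit routes):

    out/minikube-linux-amd64 -p functional-20210816214911-6986 tunnel &
    kubectl --context functional-20210816214911-6986 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://10.110.74.134/
    kill %1   # stopping the daemon removes the routes it added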

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1202: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1206: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

TestFunctional/parallel/MountCmd/any-port (7.78s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:76: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210816214911-6986 /tmp/mounttest985378978:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:110: wrote "test-1629150764732139279" to /tmp/mounttest985378978/created-by-test
functional_test_mount_test.go:110: wrote "test-1629150764732139279" to /tmp/mounttest985378978/created-by-test-removed-by-pod
functional_test_mount_test.go:110: wrote "test-1629150764732139279" to /tmp/mounttest985378978/test-1629150764732139279
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:118: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (249.637293ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:118: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh -- ls -la /mount-9p

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:136: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 16 21:52 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 16 21:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 16 21:52 test-1629150764732139279
functional_test_mount_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh cat /mount-9p/test-1629150764732139279

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:151: (dbg) Run:  kubectl --context functional-20210816214911-6986 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:343: "busybox-mount" [190ea1db-44ec-423b-a2bd-871080e53842] Pending
helpers_test.go:343: "busybox-mount" [190ea1db-44ec-423b-a2bd-871080e53842] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:343: "busybox-mount" [190ea1db-44ec-423b-a2bd-871080e53842] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:156: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.032451043s
functional_test_mount_test.go:172: (dbg) Run:  kubectl --context functional-20210816214911-6986 logs busybox-mount
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:93: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:97: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210816214911-6986 /tmp/mounttest985378978:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.78s)
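
The first findmnt probe failing with exit 1 is normal here: the mount daemon starts asynchronously, so the test polls until the 9p filesystem appears, then drives a busybox pod that reads and writes through the mount. The core of the check by hand (the host directory /tmp/mountdemo is illustrative):

    out/minikube-linux-amd64 mount -p functional-20210816214911-6986 /tmp/mountdemo:/mount-9p &
    out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "findmnt -T /mount-9p | grep 9p"   # retry until the mount is up
    out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh -- ls -la /mount-9p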

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1240: (dbg) Run:  out/minikube-linux-amd64 profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1245: Took "255.53049ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1254: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1259: Took "53.320845ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1295: Took "249.675772ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1303: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1308: Took "65.447878ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)
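
Both ProfileCmd timing tests only assert that listing completes and record how long it took; --light skips validating live cluster status, which is presumably why it returns in ~65ms versus ~250ms for the full listing. For machine consumption the JSON form can be filtered, e.g. (the .valid[].Name path is an assumption about the output shape, not taken from this log):

    out/minikube-linux-amd64 profile list -o json | jq '.valid[].Name'
    out/minikube-linux-amd64 profile list -o json --light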

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2003: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.95s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2016: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.95s)

TestFunctional/parallel/MountCmd/specific-port (1.74s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:225: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210816214911-6986 /tmp/mounttest910960025:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:255: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (248.276465ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh -- ls -la /mount-9p
functional_test_mount_test.go:273: guest mount directory contents
total 0
functional_test_mount_test.go:275: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210816214911-6986 /tmp/mounttest910960025:/mount-9p --alsologtostderr -v=1 --port 46464] ...
2021/08/16 21:52:53 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:276: reading mount text
functional_test_mount_test.go:290: done reading mount text
functional_test_mount_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:242: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "sudo umount -f /mount-9p": exit status 1 (242.68945ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:244: "out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:246: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210816214911-6986 /tmp/mounttest910960025:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)
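
Two details above are intentional: --port 46464 pins the 9p server to a fixed port, and the failed forced unmount (umount exit status 32, "not mounted") happens because stopping the mount daemon had already torn the mount down; the cleanup path tolerates that. By hand (host directory illustrative):

    out/minikube-linux-amd64 mount -p functional-20210816214911-6986 /tmp/mountdemo:/mount-9p --port 46464 &
    out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "findmnt -T /mount-9p | grep 9p"
    kill %1   # tears down the mount ...
    out/minikube-linux-amd64 -p functional-20210816214911-6986 ssh "sudo umount -f /mount-9p"   # ... so this reports "not mounted", exit 32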

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1865: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210816214911-6986 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)
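
All three variants invoke the same command: update-context rewrites the profile's kubeconfig entry when the cluster's endpoint has changed and is a no-op otherwise. One way to verify the result, assuming kubectl reads the same kubeconfig:

    out/minikube-linux-amd64 -p functional-20210816214911-6986 update-context
    kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'   # should print the VM's API server URL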

TestFunctional/delete_busybox_image (0.08s)

=== RUN   TestFunctional/delete_busybox_image
functional_test.go:183: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210816214911-6986
functional_test.go:188: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210816214911-6986
--- PASS: TestFunctional/delete_busybox_image (0.08s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210816214911-6986
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210816214911-6986
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.32s)

=== RUN   TestErrorJSONOutput
json_output_test.go:146: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20210816215441-6986 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:146: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20210816215441-6986 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (89.160584ms)

-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20210816215441-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"cd9aaa23-1c22-4f83-a573-232d729650ac","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig"},"datacontenttype":"application/json","id":"0e87d1b7-c25e-4e6e-941d-4148f41f0b3c","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"a14f80f3-0e40-4c05-9fa2-402e79364624","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube"},"datacontenttype":"application/json","id":"4e0405b3-f206-46cc-92db-093a84a7e3cf","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=12230"},"datacontenttype":"application/json","id":"07fa482f-f085-455d-b4b9-dfc2de562088","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"b35ce780-0f27-46ae-9a8a-1bf0752bc648","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-20210816215441-6986" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20210816215441-6986
--- PASS: TestErrorJSONOutput (0.32s)
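
With --output=json each line minikube prints is a CloudEvents envelope, so the failure is machine-parseable: the final event above has type io.k8s.sigs.minikube.error and carries exitcode 56 (DRV_UNSUPPORTED_OS) for the deliberately bogus --driver=fail. A sketch of extracting that with jq (json-demo is an illustrative profile name; the delete mirrors the test's cleanup):

    out/minikube-linux-amd64 start -p json-demo --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + " (exit " + .data.exitcode + ")"'
    out/minikube-linux-amd64 delete -p json-demo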

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMultiNode/serial/FreshStart2Nodes (157.39s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210816215441-6986 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0816 21:54:59.568076    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
multinode_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210816215441-6986 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m36.980283494s)
multinode_test.go:87: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215441-6986 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (157.39s)
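
This start creates the two-node fixture that every following TestMultiNode serial subtest reuses; the essential invocation is --nodes=2 followed by a status check (multinode-demo is an illustrative profile name):

    out/minikube-linux-amd64 start -p multinode-demo --nodes=2 --memory=2200 --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 -p multinode-demo status   # expect one control plane and one worker, all Running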

TestMultiNode/serial/DeployApp2Nodes (5.95s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:462: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215441-6986 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
E0816 21:57:19.466248    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
E0816 21:57:19.471564    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
E0816 21:57:19.481857    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
E0816 21:57:19.502129    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
E0816 21:57:19.542439    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
E0816 21:57:19.622749    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
E0816 21:57:19.783162    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215441-6986 -- rollout status deployment/busybox
E0816 21:57:20.103412    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
E0816 21:57:20.744355    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
E0816 21:57:22.024891    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
multinode_test.go:467: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20210816215441-6986 -- rollout status deployment/busybox: (3.606697603s)
multinode_test.go:473: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215441-6986 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:485: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215441-6986 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215441-6986 -- exec busybox-84b6686758-lc7lz -- nslookup kubernetes.io
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215441-6986 -- exec busybox-84b6686758-llzlw -- nslookup kubernetes.io
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215441-6986 -- exec busybox-84b6686758-lc7lz -- nslookup kubernetes.default
E0816 21:57:24.585942    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215441-6986 -- exec busybox-84b6686758-llzlw -- nslookup kubernetes.default
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215441-6986 -- exec busybox-84b6686758-lc7lz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215441-6986 -- exec busybox-84b6686758-llzlw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.95s)
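
The busybox deployment puts one replica on each node, and every pod must resolve an external name (kubernetes.io) plus the in-cluster service names, which exercises CoreDNS from both nodes. Condensed, with the pod names from this run:

    kubectl --context multinode-20210816215441-6986 get pods -o jsonpath='{.items[*].status.podIP}'
    kubectl --context multinode-20210816215441-6986 exec busybox-84b6686758-lc7lz -- nslookup kubernetes.default.svc.cluster.local
    kubectl --context multinode-20210816215441-6986 exec busybox-84b6686758-llzlw -- nslookup kubernetes.default.svc.cluster.local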

TestMultiNode/serial/PingHostFrom2Pods (1.03s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215441-6986 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215441-6986 -- exec busybox-84b6686758-lc7lz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215441-6986 -- exec busybox-84b6686758-lc7lz -- sh -c "ping -c 1 192.168.50.1"
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215441-6986 -- exec busybox-84b6686758-llzlw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:537: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210816215441-6986 -- exec busybox-84b6686758-llzlw -- sh -c "ping -c 1 192.168.50.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)
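
The awk/cut pipeline relies on busybox's fixed nslookup layout: the answer sits on line 5, third space-separated field. host.minikube.internal resolves to the host side of the libvirt network (192.168.50.1 here), and each pod then pings it. The probe for one pod, with the address captured first:

    HOST_IP=$(kubectl --context multinode-20210816215441-6986 exec busybox-84b6686758-lc7lz -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context multinode-20210816215441-6986 exec busybox-84b6686758-lc7lz -- ping -c 1 "$HOST_IP"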

TestMultiNode/serial/AddNode (56.31s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:106: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210816215441-6986 -v 3 --alsologtostderr
E0816 21:57:29.706174    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
E0816 21:57:39.946921    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
E0816 21:58:00.428088    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
multinode_test.go:106: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20210816215441-6986 -v 3 --alsologtostderr: (55.738820199s)
multinode_test.go:112: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215441-6986 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (56.31s)

TestMultiNode/serial/ProfileList (0.24s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:128: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

TestMultiNode/serial/CopyFile (1.81s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:169: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215441-6986 status --output json --alsologtostderr
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215441-6986 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215441-6986 ssh "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215441-6986 cp testdata/cp-test.txt multinode-20210816215441-6986-m02:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215441-6986 ssh -n multinode-20210816215441-6986-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:535: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215441-6986 cp testdata/cp-test.txt multinode-20210816215441-6986-m03:/home/docker/cp-test.txt
helpers_test.go:549: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215441-6986 ssh -n multinode-20210816215441-6986-m03 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestMultiNode/serial/CopyFile (1.81s)
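
minikube cp accepts an optional <node>: prefix on the destination, which is how the same file reaches the control plane and both workers; each copy is verified by reading it back over ssh (-n selects the node). One worker's round trip, verbatim from the run above:

    out/minikube-linux-amd64 -p multinode-20210816215441-6986 cp testdata/cp-test.txt multinode-20210816215441-6986-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-20210816215441-6986 ssh -n multinode-20210816215441-6986-m02 "sudo cat /home/docker/cp-test.txt"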

TestMultiNode/serial/StopNode (2.93s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:191: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215441-6986 node stop m03
multinode_test.go:191: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210816215441-6986 node stop m03: (2.089628206s)
multinode_test.go:197: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215441-6986 status
multinode_test.go:197: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210816215441-6986 status: exit status 7 (421.992824ms)

-- stdout --
	multinode-20210816215441-6986
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210816215441-6986-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210816215441-6986-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215441-6986 status --alsologtostderr
multinode_test.go:204: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210816215441-6986 status --alsologtostderr: exit status 7 (420.461966ms)

-- stdout --
	multinode-20210816215441-6986
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210816215441-6986-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210816215441-6986-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0816 21:58:27.277784   14525 out.go:298] Setting OutFile to fd 1 ...
	I0816 21:58:27.277966   14525 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:58:27.277975   14525 out.go:311] Setting ErrFile to fd 2...
	I0816 21:58:27.277980   14525 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 21:58:27.278075   14525 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 21:58:27.278231   14525 out.go:305] Setting JSON to false
	I0816 21:58:27.278254   14525 mustload.go:65] Loading cluster: multinode-20210816215441-6986
	I0816 21:58:27.278534   14525 config.go:177] Loaded profile config "multinode-20210816215441-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 21:58:27.278545   14525 status.go:253] checking status of multinode-20210816215441-6986 ...
	I0816 21:58:27.278895   14525 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 21:58:27.278960   14525 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 21:58:27.289347   14525 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:44737
	I0816 21:58:27.289726   14525 main.go:130] libmachine: () Calling .GetVersion
	I0816 21:58:27.290271   14525 main.go:130] libmachine: Using API Version  1
	I0816 21:58:27.290292   14525 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 21:58:27.290684   14525 main.go:130] libmachine: () Calling .GetMachineName
	I0816 21:58:27.290873   14525 main.go:130] libmachine: (multinode-20210816215441-6986) Calling .GetState
	I0816 21:58:27.293849   14525 status.go:328] multinode-20210816215441-6986 host status = "Running" (err=<nil>)
	I0816 21:58:27.293863   14525 host.go:66] Checking if "multinode-20210816215441-6986" exists ...
	I0816 21:58:27.294140   14525 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 21:58:27.294169   14525 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 21:58:27.305060   14525 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:33607
	I0816 21:58:27.305456   14525 main.go:130] libmachine: () Calling .GetVersion
	I0816 21:58:27.305889   14525 main.go:130] libmachine: Using API Version  1
	I0816 21:58:27.305914   14525 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 21:58:27.306218   14525 main.go:130] libmachine: () Calling .GetMachineName
	I0816 21:58:27.306369   14525 main.go:130] libmachine: (multinode-20210816215441-6986) Calling .GetIP
	I0816 21:58:27.311390   14525 main.go:130] libmachine: (multinode-20210816215441-6986) DBG | domain multinode-20210816215441-6986 has defined MAC address 52:54:00:db:a5:36 in network mk-multinode-20210816215441-6986
	I0816 21:58:27.311758   14525 main.go:130] libmachine: (multinode-20210816215441-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a5:36", ip: ""} in network mk-multinode-20210816215441-6986: {Iface:virbr2 ExpiryTime:2021-08-16 22:54:56 +0000 UTC Type:0 Mac:52:54:00:db:a5:36 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:multinode-20210816215441-6986 Clientid:01:52:54:00:db:a5:36}
	I0816 21:58:27.311779   14525 main.go:130] libmachine: (multinode-20210816215441-6986) DBG | domain multinode-20210816215441-6986 has defined IP address 192.168.50.125 and MAC address 52:54:00:db:a5:36 in network mk-multinode-20210816215441-6986
	I0816 21:58:27.311936   14525 host.go:66] Checking if "multinode-20210816215441-6986" exists ...
	I0816 21:58:27.312222   14525 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 21:58:27.312255   14525 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 21:58:27.322295   14525 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:46263
	I0816 21:58:27.322673   14525 main.go:130] libmachine: () Calling .GetVersion
	I0816 21:58:27.323159   14525 main.go:130] libmachine: Using API Version  1
	I0816 21:58:27.323177   14525 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 21:58:27.323476   14525 main.go:130] libmachine: () Calling .GetMachineName
	I0816 21:58:27.323642   14525 main.go:130] libmachine: (multinode-20210816215441-6986) Calling .DriverName
	I0816 21:58:27.323786   14525 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 21:58:27.323835   14525 main.go:130] libmachine: (multinode-20210816215441-6986) Calling .GetSSHHostname
	I0816 21:58:27.328368   14525 main.go:130] libmachine: (multinode-20210816215441-6986) DBG | domain multinode-20210816215441-6986 has defined MAC address 52:54:00:db:a5:36 in network mk-multinode-20210816215441-6986
	I0816 21:58:27.328691   14525 main.go:130] libmachine: (multinode-20210816215441-6986) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:a5:36", ip: ""} in network mk-multinode-20210816215441-6986: {Iface:virbr2 ExpiryTime:2021-08-16 22:54:56 +0000 UTC Type:0 Mac:52:54:00:db:a5:36 Iaid: IPaddr:192.168.50.125 Prefix:24 Hostname:multinode-20210816215441-6986 Clientid:01:52:54:00:db:a5:36}
	I0816 21:58:27.328718   14525 main.go:130] libmachine: (multinode-20210816215441-6986) DBG | domain multinode-20210816215441-6986 has defined IP address 192.168.50.125 and MAC address 52:54:00:db:a5:36 in network mk-multinode-20210816215441-6986
	I0816 21:58:27.328828   14525 main.go:130] libmachine: (multinode-20210816215441-6986) Calling .GetSSHPort
	I0816 21:58:27.328971   14525 main.go:130] libmachine: (multinode-20210816215441-6986) Calling .GetSSHKeyPath
	I0816 21:58:27.329122   14525 main.go:130] libmachine: (multinode-20210816215441-6986) Calling .GetSSHUsername
	I0816 21:58:27.329259   14525 sshutil.go:53] new ssh client: &{IP:192.168.50.125 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215441-6986/id_rsa Username:docker}
	I0816 21:58:27.423351   14525 ssh_runner.go:149] Run: systemctl --version
	I0816 21:58:27.428807   14525 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 21:58:27.440997   14525 kubeconfig.go:93] found "multinode-20210816215441-6986" server: "https://192.168.50.125:8443"
	I0816 21:58:27.441018   14525 api_server.go:164] Checking apiserver status ...
	I0816 21:58:27.441043   14525 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0816 21:58:27.450967   14525 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/2673/cgroup
	I0816 21:58:27.458056   14525 api_server.go:180] apiserver freezer: "2:freezer:/kubepods/burstable/pod00e85d4801b1a03fe557f7038f415922/ac619a7cac5b65a49adff2a254fe3fd2e355e36d29e56923e46517dee61d633a"
	I0816 21:58:27.458116   14525 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod00e85d4801b1a03fe557f7038f415922/ac619a7cac5b65a49adff2a254fe3fd2e355e36d29e56923e46517dee61d633a/freezer.state
	I0816 21:58:27.465173   14525 api_server.go:202] freezer state: "THAWED"
	I0816 21:58:27.465195   14525 api_server.go:239] Checking apiserver healthz at https://192.168.50.125:8443/healthz ...
	I0816 21:58:27.471200   14525 api_server.go:265] https://192.168.50.125:8443/healthz returned 200:
	ok
	I0816 21:58:27.471221   14525 status.go:419] multinode-20210816215441-6986 apiserver status = Running (err=<nil>)
	I0816 21:58:27.471230   14525 status.go:255] multinode-20210816215441-6986 status: &{Name:multinode-20210816215441-6986 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 21:58:27.471245   14525 status.go:253] checking status of multinode-20210816215441-6986-m02 ...
	I0816 21:58:27.471540   14525 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 21:58:27.471571   14525 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 21:58:27.482534   14525 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36657
	I0816 21:58:27.483006   14525 main.go:130] libmachine: () Calling .GetVersion
	I0816 21:58:27.483445   14525 main.go:130] libmachine: Using API Version  1
	I0816 21:58:27.483463   14525 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 21:58:27.483764   14525 main.go:130] libmachine: () Calling .GetMachineName
	I0816 21:58:27.483952   14525 main.go:130] libmachine: (multinode-20210816215441-6986-m02) Calling .GetState
	I0816 21:58:27.487028   14525 status.go:328] multinode-20210816215441-6986-m02 host status = "Running" (err=<nil>)
	I0816 21:58:27.487046   14525 host.go:66] Checking if "multinode-20210816215441-6986-m02" exists ...
	I0816 21:58:27.487360   14525 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 21:58:27.487391   14525 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 21:58:27.497930   14525 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43673
	I0816 21:58:27.498321   14525 main.go:130] libmachine: () Calling .GetVersion
	I0816 21:58:27.498754   14525 main.go:130] libmachine: Using API Version  1
	I0816 21:58:27.498774   14525 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 21:58:27.499140   14525 main.go:130] libmachine: () Calling .GetMachineName
	I0816 21:58:27.499349   14525 main.go:130] libmachine: (multinode-20210816215441-6986-m02) Calling .GetIP
	I0816 21:58:27.504500   14525 main.go:130] libmachine: (multinode-20210816215441-6986-m02) DBG | domain multinode-20210816215441-6986-m02 has defined MAC address 52:54:00:21:8f:84 in network mk-multinode-20210816215441-6986
	I0816 21:58:27.504888   14525 main.go:130] libmachine: (multinode-20210816215441-6986-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:8f:84", ip: ""} in network mk-multinode-20210816215441-6986: {Iface:virbr2 ExpiryTime:2021-08-16 22:56:36 +0000 UTC Type:0 Mac:52:54:00:21:8f:84 Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:multinode-20210816215441-6986-m02 Clientid:01:52:54:00:21:8f:84}
	I0816 21:58:27.504918   14525 main.go:130] libmachine: (multinode-20210816215441-6986-m02) DBG | domain multinode-20210816215441-6986-m02 has defined IP address 192.168.50.235 and MAC address 52:54:00:21:8f:84 in network mk-multinode-20210816215441-6986
	I0816 21:58:27.505026   14525 host.go:66] Checking if "multinode-20210816215441-6986-m02" exists ...
	I0816 21:58:27.505489   14525 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 21:58:27.505533   14525 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 21:58:27.516826   14525 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:36049
	I0816 21:58:27.517210   14525 main.go:130] libmachine: () Calling .GetVersion
	I0816 21:58:27.517575   14525 main.go:130] libmachine: Using API Version  1
	I0816 21:58:27.517606   14525 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 21:58:27.517927   14525 main.go:130] libmachine: () Calling .GetMachineName
	I0816 21:58:27.518087   14525 main.go:130] libmachine: (multinode-20210816215441-6986-m02) Calling .DriverName
	I0816 21:58:27.518249   14525 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0816 21:58:27.518268   14525 main.go:130] libmachine: (multinode-20210816215441-6986-m02) Calling .GetSSHHostname
	I0816 21:58:27.523253   14525 main.go:130] libmachine: (multinode-20210816215441-6986-m02) DBG | domain multinode-20210816215441-6986-m02 has defined MAC address 52:54:00:21:8f:84 in network mk-multinode-20210816215441-6986
	I0816 21:58:27.523648   14525 main.go:130] libmachine: (multinode-20210816215441-6986-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:8f:84", ip: ""} in network mk-multinode-20210816215441-6986: {Iface:virbr2 ExpiryTime:2021-08-16 22:56:36 +0000 UTC Type:0 Mac:52:54:00:21:8f:84 Iaid: IPaddr:192.168.50.235 Prefix:24 Hostname:multinode-20210816215441-6986-m02 Clientid:01:52:54:00:21:8f:84}
	I0816 21:58:27.523685   14525 main.go:130] libmachine: (multinode-20210816215441-6986-m02) DBG | domain multinode-20210816215441-6986-m02 has defined IP address 192.168.50.235 and MAC address 52:54:00:21:8f:84 in network mk-multinode-20210816215441-6986
	I0816 21:58:27.523795   14525 main.go:130] libmachine: (multinode-20210816215441-6986-m02) Calling .GetSSHPort
	I0816 21:58:27.523937   14525 main.go:130] libmachine: (multinode-20210816215441-6986-m02) Calling .GetSSHKeyPath
	I0816 21:58:27.524071   14525 main.go:130] libmachine: (multinode-20210816215441-6986-m02) Calling .GetSSHUsername
	I0816 21:58:27.524193   14525 sshutil.go:53] new ssh client: &{IP:192.168.50.235 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/machines/multinode-20210816215441-6986-m02/id_rsa Username:docker}
	I0816 21:58:27.619898   14525 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0816 21:58:27.630815   14525 status.go:255] multinode-20210816215441-6986-m02 status: &{Name:multinode-20210816215441-6986-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0816 21:58:27.630852   14525 status.go:253] checking status of multinode-20210816215441-6986-m03 ...
	I0816 21:58:27.631340   14525 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 21:58:27.631394   14525 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 21:58:27.642909   14525 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:35247
	I0816 21:58:27.643360   14525 main.go:130] libmachine: () Calling .GetVersion
	I0816 21:58:27.643880   14525 main.go:130] libmachine: Using API Version  1
	I0816 21:58:27.643900   14525 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 21:58:27.644150   14525 main.go:130] libmachine: () Calling .GetMachineName
	I0816 21:58:27.644300   14525 main.go:130] libmachine: (multinode-20210816215441-6986-m03) Calling .GetState
	I0816 21:58:27.647207   14525 status.go:328] multinode-20210816215441-6986-m03 host status = "Stopped" (err=<nil>)
	I0816 21:58:27.647223   14525 status.go:341] host is not running, skipping remaining checks
	I0816 21:58:27.647230   14525 status.go:255] multinode-20210816215441-6986-m03 status: &{Name:multinode-20210816215441-6986-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.93s)
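For reference, the per-node records printed by status.go:255 in the log above have this shape. A minimal Go sketch, with field names copied verbatim from the logged struct literal (the concrete type in minikube's source may differ in detail):

	package status

	// Per-node status as logged at status.go:255 above. Field names come
	// from the printed struct literal; the types here are assumptions.
	type Status struct {
		Name       string // node name, e.g. "multinode-20210816215441-6986-m02"
		Host       string // "Running" or "Stopped"
		Kubelet    string // "Running" or "Stopped"
		APIServer  string // "Irrelevant" on worker nodes
		Kubeconfig string // "Irrelevant" on worker nodes
		Worker     bool   // true for m02/m03, false for the control plane
		TimeToStop string
		DockerEnv  string
		PodManEnv  string
	}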

TestMultiNode/serial/StartAfterStop (73.79s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:235: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215441-6986 node start m03 --alsologtostderr
E0816 21:58:41.388429    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
E0816 21:59:31.879874    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
multinode_test.go:235: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210816215441-6986 node start m03 --alsologtostderr: (1m13.167198206s)
multinode_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215441-6986 status
multinode_test.go:256: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (73.79s)

TestMultiNode/serial/RestartKeepsNodes (503.29s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:264: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210816215441-6986
multinode_test.go:271: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-20210816215441-6986
E0816 22:00:03.309280    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
E0816 22:02:19.466372    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
E0816 22:02:47.149527    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
multinode_test.go:271: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-20210816215441-6986: (3m6.212639808s)
multinode_test.go:276: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210816215441-6986 --wait=true -v=8 --alsologtostderr
E0816 22:04:31.878845    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
E0816 22:05:54.929363    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
E0816 22:07:19.466979    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
multinode_test.go:276: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210816215441-6986 --wait=true -v=8 --alsologtostderr: (5m16.976725445s)
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210816215441-6986
--- PASS: TestMultiNode/serial/RestartKeepsNodes (503.29s)

TestMultiNode/serial/DeleteNode (2.28s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:375: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215441-6986 node delete m03
multinode_test.go:375: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210816215441-6986 node delete m03: (1.628478615s)
multinode_test.go:381: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215441-6986 status --alsologtostderr
multinode_test.go:405: (dbg) Run:  kubectl get nodes
multinode_test.go:413: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.28s)
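The go-template in the last kubectl invocation above walks every node's status.conditions and prints the status of the Ready condition, one value per remaining node, which is what the test asserts on after deleting m03. A standalone sketch of the same template evaluated with Go's text/template (the sample JSON is hypothetical, standing in for `kubectl get nodes -o json` output):

	package main

	import (
		"encoding/json"
		"os"
		"text/template"
	)

	// The exact template string passed to kubectl above.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Hypothetical two-node NodeList, reduced to the fields the template reads.
	const sample = `{"items":[
		{"status":{"conditions":[{"type":"Ready","status":"True"}]}},
		{"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

	func main() {
		var nodes map[string]interface{}
		if err := json.Unmarshal([]byte(sample), &nodes); err != nil {
			panic(err)
		}
		t := template.Must(template.New("ready").Parse(tmpl))
		// Prints " True" once per node: one Ready-condition status per line.
		if err := t.Execute(os.Stdout, nodes); err != nil {
			panic(err)
		}
	}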

TestMultiNode/serial/StopMultiNode (184.35s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215441-6986 stop
E0816 22:09:31.880268    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210816215441-6986 stop: (3m4.190483356s)
multinode_test.go:301: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215441-6986 status
multinode_test.go:301: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210816215441-6986 status: exit status 7 (83.088694ms)

-- stdout --
	multinode-20210816215441-6986
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210816215441-6986-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215441-6986 status --alsologtostderr
multinode_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210816215441-6986 status --alsologtostderr: exit status 7 (79.105073ms)

-- stdout --
	multinode-20210816215441-6986
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210816215441-6986-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0816 22:11:11.337153   15764 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:11:11.337665   15764 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:11:11.337680   15764 out.go:311] Setting ErrFile to fd 2...
	I0816 22:11:11.337686   15764 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:11:11.337942   15764 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:11:11.338409   15764 out.go:305] Setting JSON to false
	I0816 22:11:11.338435   15764 mustload.go:65] Loading cluster: multinode-20210816215441-6986
	I0816 22:11:11.338781   15764 config.go:177] Loaded profile config "multinode-20210816215441-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:11:11.338793   15764 status.go:253] checking status of multinode-20210816215441-6986 ...
	I0816 22:11:11.339165   15764 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:11:11.339224   15764 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:11:11.349492   15764 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:37677
	I0816 22:11:11.349924   15764 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:11:11.350442   15764 main.go:130] libmachine: Using API Version  1
	I0816 22:11:11.350464   15764 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:11:11.350815   15764 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:11:11.350998   15764 main.go:130] libmachine: (multinode-20210816215441-6986) Calling .GetState
	I0816 22:11:11.353688   15764 status.go:328] multinode-20210816215441-6986 host status = "Stopped" (err=<nil>)
	I0816 22:11:11.353702   15764 status.go:341] host is not running, skipping remaining checks
	I0816 22:11:11.353708   15764 status.go:255] multinode-20210816215441-6986 status: &{Name:multinode-20210816215441-6986 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0816 22:11:11.353730   15764 status.go:253] checking status of multinode-20210816215441-6986-m02 ...
	I0816 22:11:11.354023   15764 main.go:130] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0816 22:11:11.354064   15764 main.go:130] libmachine: Launching plugin server for driver kvm2
	I0816 22:11:11.364173   15764 main.go:130] libmachine: Plugin server listening at address 127.0.0.1:43567
	I0816 22:11:11.364583   15764 main.go:130] libmachine: () Calling .GetVersion
	I0816 22:11:11.364966   15764 main.go:130] libmachine: Using API Version  1
	I0816 22:11:11.364986   15764 main.go:130] libmachine: () Calling .SetConfigRaw
	I0816 22:11:11.365325   15764 main.go:130] libmachine: () Calling .GetMachineName
	I0816 22:11:11.365482   15764 main.go:130] libmachine: (multinode-20210816215441-6986-m02) Calling .GetState
	I0816 22:11:11.368038   15764 status.go:328] multinode-20210816215441-6986-m02 host status = "Stopped" (err=<nil>)
	I0816 22:11:11.368052   15764 status.go:341] host is not running, skipping remaining checks
	I0816 22:11:11.368057   15764 status.go:255] multinode-20210816215441-6986-m02 status: &{Name:multinode-20210816215441-6986-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (184.35s)
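The "exit status 7" above is expected here: minikube's status command appears to build its exit code as a bitmask, one bit per stopped layer, so a fully stopped cluster sets all three bits. A sketch of that reading; the constant names are hypothetical:

	package main

	import "fmt"

	// Assumed layout of the status exit-code bits; a fully stopped
	// cluster ORs all of them together.
	const (
		minikubeNotRunning = 1 << 0 // host VM down
		clusterNotRunning  = 1 << 1 // control plane down
		k8sNotRunning      = 1 << 2 // kubernetes components down
	)

	func main() {
		fmt.Println(minikubeNotRunning | clusterNotRunning | k8sNotRunning) // 7
	}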

TestMultiNode/serial/RestartMultiNode (248.37s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:335: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210816215441-6986 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0816 22:12:19.466148    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
E0816 22:13:42.510649    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
E0816 22:14:31.878522    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
multinode_test.go:335: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210816215441-6986 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (4m7.82524923s)
multinode_test.go:341: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210816215441-6986 status --alsologtostderr
multinode_test.go:355: (dbg) Run:  kubectl get nodes
multinode_test.go:363: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (248.37s)

TestMultiNode/serial/ValidateNameConflict (61.66s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:424: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210816215441-6986
multinode_test.go:433: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210816215441-6986-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:433: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20210816215441-6986-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (99.350521ms)

-- stdout --
	* [multinode-20210816215441-6986-m02] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_LOCATION=12230
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210816215441-6986-m02' is duplicated with machine name 'multinode-20210816215441-6986-m02' in profile 'multinode-20210816215441-6986'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210816215441-6986-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:441: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210816215441-6986-m03 --driver=kvm2  --container-runtime=containerd: (1m0.133112145s)
multinode_test.go:448: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210816215441-6986
multinode_test.go:448: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20210816215441-6986: exit status 80 (230.807562ms)

-- stdout --
	* Adding node m03 to cluster multinode-20210816215441-6986
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210816215441-6986-m03 already exists in multinode-20210816215441-6986-m03 profile
	* 
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	[warning]: invalid value provided to Color, using default
	╭─────────────────────────────────────────────────────────────────────────────╮
	│                                                                             │
	│    * If the above advice does not help, please let us know:                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose               │
	│                                                                             │
	│    * Please attach the following file to the GitHub issue:                  │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:453: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20210816215441-6986-m03
multinode_test.go:453: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20210816215441-6986-m03: (1.148062118s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (61.66s)
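A hypothetical sketch of the uniqueness check behind the exit-14 (MK_USAGE) failure above: a new profile name must not collide with any machine (node) name that an existing profile already owns, which is exactly what "-m02" trips over here:

	package main

	import "fmt"

	// nameInUse reports whether a requested profile name collides with an
	// existing profile or one of its machines (assumed logic).
	func nameInUse(name string, profiles map[string][]string) bool {
		for profile, machines := range profiles {
			if profile == name {
				return true
			}
			for _, machine := range machines {
				if machine == name {
					return true
				}
			}
		}
		return false
	}

	func main() {
		existing := map[string][]string{
			"multinode-20210816215441-6986": {
				"multinode-20210816215441-6986",
				"multinode-20210816215441-6986-m02",
			},
		}
		fmt.Println(nameInUse("multinode-20210816215441-6986-m02", existing)) // true -> refuse
	}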

TestDebPackageInstall/install_amd64_debian:sid/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:sid/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (11.14s)

=== RUN   TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (11.143282297s)
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (11.14s)

TestDebPackageInstall/install_amd64_debian:latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (10.02s)

=== RUN   TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (10.017074676s)
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (10.02s)

TestDebPackageInstall/install_amd64_debian:10/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:10/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:10/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (9.7s)

=== RUN   TestDebPackageInstall/install_amd64_debian:10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (9.694927255s)
--- PASS: TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (9.70s)

TestDebPackageInstall/install_amd64_debian:9/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:9/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:9/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.31s)

=== RUN   TestDebPackageInstall/install_amd64_debian:9/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (8.309280185s)
--- PASS: TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.31s)

TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (17.04s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
E0816 22:17:19.469445    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (17.044732564s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (17.04s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (16.2s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (16.204466946s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (16.20s)

TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (16.48s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (16.482399854s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (16.48s)

TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (15.37s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/KVM_Linux_containerd_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.22.0-0_amd64.deb": (15.371541977s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (15.37s)

TestPreload (153.14s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210816221807-6986 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.17.0
E0816 22:19:31.878697    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210816221807-6986 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.17.0: (1m37.691870434s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210816221807-6986 -- sudo crictl pull busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20210816221807-6986 -- sudo crictl pull busybox: (2.461793014s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210816221807-6986 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.17.3
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210816221807-6986 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.17.3: (51.599279111s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210816221807-6986 -- sudo crictl image ls
helpers_test.go:176: Cleaning up "test-preload-20210816221807-6986" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20210816221807-6986
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20210816221807-6986: (1.153309829s)
--- PASS: TestPreload (153.14s)
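What this test exercises: the first start runs with --preload=false on v1.17.0 so every image is pulled, busybox is then pulled by hand, and the restart on v1.17.3 (for which a preload tarball exists) must not wipe the manually pulled image. A sketch of how such a preload artifact is keyed; the name format and schema number are assumptions inferred from minikube's published artifacts, not taken from this log:

	package main

	import "fmt"

	// preloadName sketches the assumed artifact key: one tarball per preload
	// schema version, Kubernetes version, container runtime, and architecture.
	func preloadName(schema int, k8s, runtime, arch string) string {
		return fmt.Sprintf("preloaded-images-k8s-v%d-%s-%s-overlay2-%s.tar.lz4",
			schema, k8s, runtime, arch)
	}

	func main() {
		// Schema number 11 is a placeholder.
		fmt.Println(preloadName(11, "v1.17.3", "containerd", "amd64"))
	}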

TestScheduledStopUnix (103.76s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20210816222040-6986 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20210816222040-6986 --memory=2048 --driver=kvm2  --container-runtime=containerd: (1m4.682488129s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210816222040-6986 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20210816222040-6986 -n scheduled-stop-20210816222040-6986
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210816222040-6986 --schedule 8s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210816222040-6986 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210816222040-6986 -n scheduled-stop-20210816222040-6986
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210816222040-6986
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210816222040-6986 --schedule 5s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0816 22:22:19.466378    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210816222040-6986
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-20210816222040-6986: exit status 7 (68.43591ms)

-- stdout --
	scheduled-stop-20210816222040-6986
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210816222040-6986 -n scheduled-stop-20210816222040-6986
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210816222040-6986 -n scheduled-stop-20210816222040-6986: exit status 7 (65.896983ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-20210816222040-6986" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20210816222040-6986
--- PASS: TestScheduledStopUnix (103.76s)
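A minimal sketch of the scheduling behaviour exercised above, using a plain timer. The real "minikube stop --schedule" daemonizes a helper process and kills any previously scheduled one, which is why replacing a schedule logs "os: process already finished" in this test:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Arm a stop for 8s out, as "--schedule 8s" does.
		stop := time.AfterFunc(8*time.Second, func() { fmt.Println("stopping cluster") })

		// "--cancel-scheduled" corresponds to disarming the timer before it fires.
		if stop.Stop() {
			fmt.Println("scheduled stop cancelled")
		}
	}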

TestRunningBinaryUpgrade (151.58s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.16.0.252787608.exe start -p running-upgrade-20210816222503-6986 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.16.0.252787608.exe start -p running-upgrade-20210816222503-6986 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m17.896489333s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20210816222503-6986 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20210816222503-6986 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m12.013305475s)
helpers_test.go:176: Cleaning up "running-upgrade-20210816222503-6986" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20210816222503-6986
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20210816222503-6986: (1.31678907s)
--- PASS: TestRunningBinaryUpgrade (151.58s)

TestKubernetesUpgrade (339.07s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210816222225-6986 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0816 22:22:34.930915    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:224: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210816222225-6986 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (2m22.900312368s)
version_upgrade_test.go:229: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210816222225-6986
version_upgrade_test.go:229: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210816222225-6986: (5.632820641s)
version_upgrade_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20210816222225-6986 status --format={{.Host}}
version_upgrade_test.go:234: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20210816222225-6986 status --format={{.Host}}: exit status 7 (82.593342ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:236: status error: exit status 7 (may be ok)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210816222225-6986 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:245: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210816222225-6986 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (2m33.53531096s)
version_upgrade_test.go:250: (dbg) Run:  kubectl --context kubernetes-upgrade-20210816222225-6986 version --output=json
version_upgrade_test.go:269: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:271: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210816222225-6986 --memory=2200 --kubernetes-version=v1.14.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:271: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210816222225-6986 --memory=2200 --kubernetes-version=v1.14.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (130.819406ms)

-- stdout --
	* [kubernetes-upgrade-20210816222225-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_LOCATION=12230
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.0-rc.0 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20210816222225-6986
	    minikube start -p kubernetes-upgrade-20210816222225-6986 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210816222225-69862 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.22.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210816222225-6986 --kubernetes-version=v1.22.0-rc.0
	    

** /stderr **
version_upgrade_test.go:275: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210816222225-6986 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:277: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210816222225-6986 --memory=2200 --kubernetes-version=v1.22.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (35.350814153s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-20210816222225-6986" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210816222225-6986
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210816222225-6986: (1.310536753s)
--- PASS: TestKubernetesUpgrade (339.07s)
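A hypothetical sketch of the guard behind exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) above: refuse any start whose requested Kubernetes version is older than the version the cluster already runs. Written here with github.com/blang/semver/v4; whether minikube uses that library for this exact check is an assumption:

	package main

	import (
		"fmt"
		"strings"

		"github.com/blang/semver/v4"
	)

	// validateNoDowngrade rejects requests that would move an existing
	// cluster to an older Kubernetes version.
	func validateNoDowngrade(current, requested string) error {
		cur, err := semver.Make(strings.TrimPrefix(current, "v"))
		if err != nil {
			return err
		}
		req, err := semver.Make(strings.TrimPrefix(requested, "v"))
		if err != nil {
			return err
		}
		if req.LT(cur) {
			return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s",
				current, requested)
		}
		return nil
	}

	func main() {
		// Mirrors the failing case in the log: v1.22.0-rc.0 -> v1.14.0.
		fmt.Println(validateNoDowngrade("v1.22.0-rc.0", "v1.14.0"))
	}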

TestPause/serial/Start (123.58s)

=== RUN   TestPause/serial/Start

=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210816222224-6986 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd

=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210816222224-6986 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (2m3.577347979s)
--- PASS: TestPause/serial/Start (123.58s)

TestNetworkPlugins/group/false (0.39s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:213: (dbg) Run:  out/minikube-linux-amd64 start -p false-20210816222225-6986 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:213: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20210816222225-6986 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (140.461205ms)

-- stdout --
	* [false-20210816222225-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	  - MINIKUBE_LOCATION=12230
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0816 22:22:25.061120    9357 out.go:298] Setting OutFile to fd 1 ...
	I0816 22:22:25.061189    9357 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:22:25.061192    9357 out.go:311] Setting ErrFile to fd 2...
	I0816 22:22:25.061195    9357 out.go:345] TERM=,COLORTERM=, which probably does not support color
	I0816 22:22:25.061300    9357 root.go:313] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/bin
	I0816 22:22:25.061549    9357 out.go:305] Setting JSON to false
	I0816 22:22:25.096613    9357 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-3","uptime":3907,"bootTime":1629148638,"procs":157,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-16-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0816 22:22:25.096707    9357 start.go:121] virtualization: kvm guest
	I0816 22:22:25.099569    9357 out.go:177] * [false-20210816222225-6986] minikube v1.22.0 on Debian 9.13 (kvm/amd64)
	I0816 22:22:25.099661    9357 notify.go:169] Checking for updates...
	I0816 22:22:25.101221    9357 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/kubeconfig
	I0816 22:22:25.102924    9357 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0816 22:22:25.104475    9357 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube
	I0816 22:22:25.105913    9357 out.go:177]   - MINIKUBE_LOCATION=12230
	I0816 22:22:25.106368    9357 config.go:177] Loaded profile config "force-systemd-env-20210816222224-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:22:25.106460    9357 config.go:177] Loaded profile config "offline-containerd-20210816222224-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:22:25.106524    9357 config.go:177] Loaded profile config "pause-20210816222224-6986": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.3
	I0816 22:22:25.106560    9357 driver.go:335] Setting default libvirt URI to qemu:///system
	I0816 22:22:25.145307    9357 out.go:177] * Using the kvm2 driver based on user configuration
	I0816 22:22:25.145331    9357 start.go:278] selected driver: kvm2
	I0816 22:22:25.145336    9357 start.go:751] validating driver "kvm2" against <nil>
	I0816 22:22:25.145351    9357 start.go:762] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	I0816 22:22:25.147518    9357 out.go:177] 
	W0816 22:22:25.147607    9357 out.go:242] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0816 22:22:25.149168    9357 out.go:177] 

** /stderr **
helpers_test.go:176: Cleaning up "false-20210816222225-6986" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-20210816222225-6986
--- PASS: TestNetworkPlugins/group/false (0.39s)
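The MK_USAGE exit above is the expected behaviour this test asserts: containerd has no built-in docker-style pod networking, so minikube rejects an explicit --cni=false with non-docker runtimes. A minimal sketch of that validation; the exact condition is an assumption:

	package main

	import "fmt"

	// validateCNI sketches the check behind "The containerd container
	// runtime requires CNI" (assumed logic).
	func validateCNI(runtime, cni string) error {
		if runtime != "docker" && cni == "false" {
			return fmt.Errorf("the %q container runtime requires CNI", runtime)
		}
		return nil
	}

	func main() {
		fmt.Println(validateCNI("containerd", "false")) // non-nil -> exit MK_USAGE
	}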

TestPause/serial/Unpause (0.95s)

=== RUN   TestPause/serial/Unpause
pause_test.go:118: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20210816222224-6986 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.95s)

TestPause/serial/DeletePaused (1.22s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20210816222224-6986 --alsologtostderr -v=5

=== CONT  TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20210816222224-6986 --alsologtostderr -v=5: (1.215075209s)
--- PASS: TestPause/serial/DeletePaused (1.22s)

TestPause/serial/VerifyDeletedResources (15.8s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:139: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:139: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.804273084s)
--- PASS: TestPause/serial/VerifyDeletedResources (15.80s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.34s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:208: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20210816222405-6986
version_upgrade_test.go:208: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-20210816222405-6986: (1.343485855s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.34s)

TestNetworkPlugins/group/auto/Start (92.37s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20210816222224-6986 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p auto-20210816222224-6986 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=kvm2  --container-runtime=containerd: (1m32.36791472s)
--- PASS: TestNetworkPlugins/group/auto/Start (92.37s)

TestNetworkPlugins/group/kindnet/Start (107.03s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20210816222225-6986 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20210816222225-6986 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m47.034028285s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (107.03s)

TestNetworkPlugins/group/cilium/Start (204.11s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20210816222225-6986 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20210816222225-6986 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=kvm2  --container-runtime=containerd: (3m24.114286073s)
--- PASS: TestNetworkPlugins/group/cilium/Start (204.11s)

TestNetworkPlugins/group/calico/Start (146.43s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20210816222225-6986 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p calico-20210816222225-6986 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=kvm2  --container-runtime=containerd: (2m26.431606911s)
--- PASS: TestNetworkPlugins/group/calico/Start (146.43s)

TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20210816222224-6986 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

TestNetworkPlugins/group/auto/NetCatPod (17.66s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context auto-20210816222224-6986 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-2k2tv" [148fd0fe-752c-4cfd-8096-e1922ce4f893] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-2k2tv" [148fd0fe-752c-4cfd-8096-e1922ce4f893] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 17.016072353s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (17.66s)
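The helpers_test.go:343 lines above come from polling pods by label until the Ready condition is true. A rough standalone equivalent with client-go; the namespace, selector, and poll interval are assumptions, not the harness's actual values:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForLabel polls until every pod matching selector reports Ready.
	func waitForLabel(c kubernetes.Interface, selector string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pods, err := c.CoreV1().Pods("default").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // keep polling through transient errors
			}
			for _, p := range pods.Items {
				ready := false
				for _, cond := range p.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						ready = true
					}
				}
				if !ready {
					return false, nil
				}
			}
			return true, nil
		})
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(config)
		fmt.Println(waitForLabel(client, "app=netcat", 15*time.Minute))
	}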

TestNetworkPlugins/group/auto/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:162: (dbg) Run:  kubectl --context auto-20210816222224-6986 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.28s)

TestNetworkPlugins/group/auto/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:181: (dbg) Run:  kubectl --context auto-20210816222224-6986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

TestNetworkPlugins/group/auto/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:231: (dbg) Run:  kubectl --context auto-20210816222224-6986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.25s)
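Localhost and HairPin both rely on nc -z (probe the port, send no payload) with a 5-second timeout; HairPin dials the pod's own Service name, so it only passes when hairpin NAT lets a pod reach itself through its Service. Only the exit status matters, e.g. (service name netcat inferred from the command above):

	kubectl --context auto-20210816222224-6986 exec deployment/netcat -- /bin/sh -c 'nc -w 5 -z netcat 8080; echo exit=$?'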
TestNetworkPlugins/group/custom-weave/Start (127.17s)
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20210816222225-6986 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=kvm2  --container-runtime=containerd
E0816 22:29:31.878977    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20210816222225-6986 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=kvm2  --container-runtime=containerd: (2m7.171962154s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (127.17s)
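Unlike the named CNIs, --cni here points at a manifest file (testdata/weavenet.yaml), which minikube applies instead of a bundled configuration. The same mechanism should work for any custom CNI manifest, roughly (with <profile> as a placeholder):

	out/minikube-linux-amd64 start -p <profile> --cni=/path/to/custom-cni.yaml --driver=kvm2 --container-runtime=containerd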
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:343: "kindnet-nbk8k" [5ef491cb-acf7-413b-95da-ebcdf4410b72] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.023383909s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)
TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20210816222225-6986 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)
TestNetworkPlugins/group/kindnet/NetCatPod (14.66s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context kindnet-20210816222225-6986 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-fsg8p" [07b00c0c-9888-46e2-9cf2-34a0d712ba1f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-fsg8p" [07b00c0c-9888-46e2-9cf2-34a0d712ba1f] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.039961298s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.66s)
TestNetworkPlugins/group/kindnet/DNS (0.83s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20210816222225-6986 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.83s)
TestNetworkPlugins/group/kindnet/Localhost (0.26s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:181: (dbg) Run:  kubectl --context kindnet-20210816222225-6986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.26s)
TestNetworkPlugins/group/kindnet/HairPin (0.36s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:231: (dbg) Run:  kubectl --context kindnet-20210816222225-6986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.36s)
TestNetworkPlugins/group/enable-default-cni/Start (85.65s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20210816222225-6986 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
E0816 22:30:22.511077    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20210816222225-6986 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m25.652173977s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.65s)
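--enable-default-cni=true is the older spelling for minikube's built-in bridge CNI configuration; the bridge group further down exercises what appears to be the equivalent modern flag:

	out/minikube-linux-amd64 start -p <profile> --cni=bridge --driver=kvm2 --container-runtime=containerd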
TestNetworkPlugins/group/calico/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:343: "calico-node-kl7rq" [299cd6d8-3888-4a10-88f8-a5c88d708790] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.029753365s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)
TestNetworkPlugins/group/calico/KubeletFlags (0.25s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-20210816222225-6986 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)
TestNetworkPlugins/group/calico/NetCatPod (11.89s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context calico-20210816222225-6986 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-gzvkj" [d1c38e2b-b505-4f63-86bb-b05bf07a3b06] Pending
helpers_test.go:343: "netcat-66fbc655d5-gzvkj" [d1c38e2b-b505-4f63-86bb-b05bf07a3b06] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:343: "netcat-66fbc655d5-gzvkj" [d1c38e2b-b505-4f63-86bb-b05bf07a3b06] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.026712223s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.89s)
TestNetworkPlugins/group/calico/DNS (0.39s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:162: (dbg) Run:  kubectl --context calico-20210816222225-6986 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.39s)
TestNetworkPlugins/group/calico/Localhost (0.24s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:181: (dbg) Run:  kubectl --context calico-20210816222225-6986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.24s)
TestNetworkPlugins/group/calico/HairPin (0.24s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:231: (dbg) Run:  kubectl --context calico-20210816222225-6986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)
TestNetworkPlugins/group/flannel/Start (127.08s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-20210816222225-6986 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/flannel/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p flannel-20210816222225-6986 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=flannel --driver=kvm2  --container-runtime=containerd: (2m7.081511526s)
--- PASS: TestNetworkPlugins/group/flannel/Start (127.08s)
TestNetworkPlugins/group/cilium/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:343: "cilium-2xllk" [91d44a66-5795-4caf-afb6-4285e9807cdd] Running
=== CONT  TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.030300489s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.03s)
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20210816222225-6986 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)
TestNetworkPlugins/group/enable-default-cni/NetCatPod (17.7s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context enable-default-cni-20210816222225-6986 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-77pxd" [894ddcb7-f53f-4976-962d-d59231d359fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-77pxd" [894ddcb7-f53f-4976-962d-d59231d359fc] Running
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 17.010101006s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (17.70s)
TestNetworkPlugins/group/cilium/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20210816222225-6986 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.23s)
TestNetworkPlugins/group/cilium/NetCatPod (20.78s)
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context cilium-20210816222225-6986 replace --force -f testdata/netcat-deployment.yaml
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-zvc65" [26bb7472-c2bc-4f30-b331-8494c8cf93a7] Pending
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-zvc65" [26bb7472-c2bc-4f30-b331-8494c8cf93a7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-zvc65" [26bb7472-c2bc-4f30-b331-8494c8cf93a7] Running
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 20.016055481s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (20.78s)
TestNetworkPlugins/group/custom-weave/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20210816222225-6986 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.27s)
TestNetworkPlugins/group/custom-weave/NetCatPod (18.76s)
=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context custom-weave-20210816222225-6986 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-56p7s" [9046183b-1ad7-4d79-8795-ce29ed2ce22b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/custom-weave/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-56p7s" [9046183b-1ad7-4d79-8795-ce29ed2ce22b] Running
=== CONT  TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 18.021061704s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (18.76s)
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210816222225-6986 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)
TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:181: (dbg) Run:  kubectl --context enable-default-cni-20210816222225-6986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)
TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:231: (dbg) Run:  kubectl --context enable-default-cni-20210816222225-6986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)
TestNetworkPlugins/group/bridge/Start (87.99s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20210816222225-6986 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20210816222225-6986 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m27.992510136s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.99s)
TestNetworkPlugins/group/cilium/DNS (0.36s)
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:162: (dbg) Run:  kubectl --context cilium-20210816222225-6986 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.36s)
TestStartStop/group/old-k8s-version/serial/FirstStart (159.67s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210816223154-6986 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.14.0
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210816223154-6986 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.14.0: (2m39.672897167s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (159.67s)
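This profile pins --kubernetes-version=v1.14.0 (plus KVM-specific options such as --kvm-network=default and --kvm-qemu-uri=qemu:///system) to exercise an old control plane. Once the cluster is up, the skew is easy to confirm:

	kubectl --context old-k8s-version-20210816223154-6986 version --short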
TestNetworkPlugins/group/cilium/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:181: (dbg) Run:  kubectl --context cilium-20210816222225-6986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.21s)
TestNetworkPlugins/group/cilium/HairPin (0.22s)
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:231: (dbg) Run:  kubectl --context cilium-20210816222225-6986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.22s)
TestStartStop/group/no-preload/serial/FirstStart (187.87s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210816223156-6986 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0
E0816 22:32:19.466431    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210816223156-6986 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (3m7.870026139s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (187.87s)
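--preload=false skips the preloaded image tarball, so every image is pulled through containerd individually; that is consistent with this being the slowest FirstStart in the run (3m07s, versus 1m35s for embed-certs below). The images that landed on the node can be listed with, for example:

	out/minikube-linux-amd64 ssh -p no-preload-20210816223156-6986 "sudo crictl images"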
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-system" ...
helpers_test.go:343: "kube-flannel-ds-amd64-bhzxr" [03ca03df-4aa0-47c0-bdb6-0d0183afcec0] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.028292522s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)
TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-20210816222225-6986 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)
TestNetworkPlugins/group/flannel/NetCatPod (11.63s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context flannel-20210816222225-6986 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-cf6tq" [ce64a624-2884-4f3c-8774-c96e96cd800d] Pending
=== CONT  TestNetworkPlugins/group/flannel/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-cf6tq" [ce64a624-2884-4f3c-8774-c96e96cd800d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/flannel/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-cf6tq" [ce64a624-2884-4f3c-8774-c96e96cd800d] Running
=== CONT  TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.021023667s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.63s)
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20210816222225-6986 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)
TestNetworkPlugins/group/bridge/NetCatPod (11.69s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context bridge-20210816222225-6986 replace --force -f testdata/netcat-deployment.yaml
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:343: "netcat-66fbc655d5-qv4kp" [5156ec07-2ebb-42e5-b946-a829a0476593] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
helpers_test.go:343: "netcat-66fbc655d5-qv4kp" [5156ec07-2ebb-42e5-b946-a829a0476593] Running
=== CONT  TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.03063277s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.69s)
TestNetworkPlugins/group/flannel/DNS (0.28s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:162: (dbg) Run:  kubectl --context flannel-20210816222225-6986 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.28s)
TestNetworkPlugins/group/flannel/Localhost (0.3s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:181: (dbg) Run:  kubectl --context flannel-20210816222225-6986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.30s)
TestNetworkPlugins/group/flannel/HairPin (0.95s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:231: (dbg) Run:  kubectl --context flannel-20210816222225-6986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.95s)
TestNetworkPlugins/group/bridge/DNS (44.93s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210816222225-6986 exec deployment/netcat -- nslookup kubernetes.default
=== CONT  TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20210816222225-6986 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.318735869s)
-- stdout --
	;; connection timed out; no servers could be reached
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210816222225-6986 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:162: (dbg) Non-zero exit: kubectl --context bridge-20210816222225-6986 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.289806619s)
-- stdout --
	;; connection timed out; no servers could be reached
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210816222225-6986 exec deployment/netcat -- nslookup kubernetes.default
E0816 22:34:08.368752    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816222224-6986/client.crt: no such file or directory
E0816 22:34:08.374170    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816222224-6986/client.crt: no such file or directory
E0816 22:34:08.384432    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816222224-6986/client.crt: no such file or directory
E0816 22:34:08.404747    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816222224-6986/client.crt: no such file or directory
E0816 22:34:08.445072    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816222224-6986/client.crt: no such file or directory
E0816 22:34:08.525416    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816222224-6986/client.crt: no such file or directory
E0816 22:34:08.685839    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816222224-6986/client.crt: no such file or directory
E0816 22:34:09.006399    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816222224-6986/client.crt: no such file or directory
E0816 22:34:09.647347    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816222224-6986/client.crt: no such file or directory
E0816 22:34:10.927559    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816222224-6986/client.crt: no such file or directory
E0816 22:34:13.488347    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816222224-6986/client.crt: no such file or directory
net_test.go:162: (dbg) Done: kubectl --context bridge-20210816222225-6986 exec deployment/netcat -- nslookup kubernetes.default: (11.520436215s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (44.93s)
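This pass was marginal: the first two nslookup attempts timed out after roughly 15s each, and only the third, retried by the test helper, succeeded, which is how a passing test still took 44.93s. A rough manual equivalent of that retry loop:

	# retry the in-pod lookup a few times before giving up
	for i in 1 2 3; do
	  kubectl --context bridge-20210816222225-6986 exec deployment/netcat -- nslookup kubernetes.default && break
	  sleep 5
	done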
TestStartStop/group/embed-certs/serial/FirstStart (95.4s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210816223333-6986 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.21.3
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210816223333-6986 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.21.3: (1m35.395128393s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (95.40s)
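--embed-certs stores the client certificate and key inline in kubeconfig rather than as file paths under .minikube, so the context keeps working if the profile directory moves. With the KUBECONFIG from this run active, the inline data should be visible:

	kubectl config view --raw | grep client-certificate-data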
TestNetworkPlugins/group/bridge/Localhost (0.25s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:181: (dbg) Run:  kubectl --context bridge-20210816222225-6986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.25s)
TestNetworkPlugins/group/bridge/HairPin (0.25s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:231: (dbg) Run:  kubectl --context bridge-20210816222225-6986 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.25s)
E0816 22:45:45.706117    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816223156-6986/client.crt: no such file or directory
E0816 22:45:47.190607    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210816222225-6986/client.crt: no such file or directory
TestStartStop/group/default-k8s-different-port/serial/FirstStart (83.37s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210816223418-6986 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.21.3
E0816 22:34:28.849494    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816222224-6986/client.crt: no such file or directory
E0816 22:34:31.878598    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210816223418-6986 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.21.3: (1m23.367283085s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (83.37s)
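--apiserver-port=8444 moves the API server off the default 8443; the advertised endpoint should reflect that:

	kubectl --context default-k8s-different-port-20210816223418-6986 cluster-info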
TestStartStop/group/old-k8s-version/serial/DeployApp (9.67s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210816223154-6986 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [21c522af-fee2-11eb-bb5b-525400bf2371] Pending
helpers_test.go:343: "busybox" [21c522af-fee2-11eb-bb5b-525400bf2371] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [21c522af-fee2-11eb-bb5b-525400bf2371] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.01853012s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210816223154-6986 exec busybox -- /bin/sh -c "ulimit -n"
E0816 22:34:43.987145    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816222225-6986/client.crt: no such file or directory
E0816 22:34:43.992423    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816222225-6986/client.crt: no such file or directory
E0816 22:34:44.002698    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816222225-6986/client.crt: no such file or directory
E0816 22:34:44.023265    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816222225-6986/client.crt: no such file or directory
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.67s)
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.5s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20210816223154-6986 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0816 22:34:44.064178    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816222225-6986/client.crt: no such file or directory
E0816 22:34:44.144807    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816222225-6986/client.crt: no such file or directory
E0816 22:34:44.305204    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816222225-6986/client.crt: no such file or directory
E0816 22:34:44.625471    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816222225-6986/client.crt: no such file or directory
E0816 22:34:45.266380    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816222225-6986/client.crt: no such file or directory
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20210816223154-6986 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.256085935s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context old-k8s-version-20210816223154-6986 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.50s)
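EnableAddonWhileActive turns on metrics-server against a live cluster while overriding its image and registry (fake.domain), which checks that --images/--registries reach the rendered deployment; the describe call above is the assertion. To pull out just the resulting image reference, a sketch:

	kubectl --context old-k8s-version-20210816223154-6986 get deploy/metrics-server -n kube-system -o jsonpath='{.spec.template.spec.containers[0].image}'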
TestStartStop/group/old-k8s-version/serial/Stop (92.54s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20210816223154-6986 --alsologtostderr -v=3
E0816 22:34:46.546922    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816222225-6986/client.crt: no such file or directory
E0816 22:34:49.107164    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816222225-6986/client.crt: no such file or directory
E0816 22:34:49.329647    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816222224-6986/client.crt: no such file or directory
E0816 22:34:54.227367    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816222225-6986/client.crt: no such file or directory
=== CONT  TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20210816223154-6986 --alsologtostderr -v=3: (1m32.536576455s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.54s)
TestStartStop/group/no-preload/serial/DeployApp (11.56s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210816223156-6986 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [bd9036c2-345f-4efb-b29f-332c8672b9ae] Pending
helpers_test.go:343: "busybox" [bd9036c2-345f-4efb-b29f-332c8672b9ae] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0816 22:35:06.655529    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816222225-6986/client.crt: no such file or directory
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
helpers_test.go:343: "busybox" [bd9036c2-345f-4efb-b29f-332c8672b9ae] Running
=== CONT  TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 11.023547532s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210816223156-6986 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (11.56s)
TestStartStop/group/embed-certs/serial/DeployApp (10.63s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210816223333-6986 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [9071dbb4-1780-4879-b5a5-30dc20b73dfe] Pending
=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
helpers_test.go:343: "busybox" [9071dbb4-1780-4879-b5a5-30dc20b73dfe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [9071dbb4-1780-4879-b5a5-30dc20b73dfe] Running
=== CONT  TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.02836535s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210816223333-6986 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.63s)
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.52s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20210816223156-6986 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20210816223156-6986 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.332804864s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context no-preload-20210816223156-6986 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.52s)
TestStartStop/group/no-preload/serial/Stop (98.23s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20210816223156-6986 --alsologtostderr -v=3
=== CONT  TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20210816223156-6986 --alsologtostderr -v=3: (1m38.229819773s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (98.23s)
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20210816223333-6986 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20210816223333-6986 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.028293944s)
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context embed-certs-20210816223333-6986 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)
TestStartStop/group/embed-certs/serial/Stop (95.16s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20210816223333-6986 --alsologtostderr -v=3
E0816 22:35:27.136250    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816222225-6986/client.crt: no such file or directory
E0816 22:35:30.290157    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816222224-6986/client.crt: no such file or directory
=== CONT  TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20210816223333-6986 --alsologtostderr -v=3: (1m35.158505159s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (95.16s)
TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.59s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210816223418-6986 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:343: "busybox" [dff7f930-659b-4204-ba70-a68735639a52] Pending
helpers_test.go:343: "busybox" [dff7f930-659b-4204-ba70-a68735639a52] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:343: "busybox" [dff7f930-659b-4204-ba70-a68735639a52] Running
E0816 22:35:47.191052    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210816222225-6986/client.crt: no such file or directory
E0816 22:35:47.196315    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210816222225-6986/client.crt: no such file or directory
E0816 22:35:47.206549    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210816222225-6986/client.crt: no such file or directory
E0816 22:35:47.226780    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210816222225-6986/client.crt: no such file or directory
E0816 22:35:47.267034    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210816222225-6986/client.crt: no such file or directory
E0816 22:35:47.347275    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210816222225-6986/client.crt: no such file or directory
E0816 22:35:47.507664    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210816222225-6986/client.crt: no such file or directory
E0816 22:35:47.828082    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210816222225-6986/client.crt: no such file or directory
E0816 22:35:48.469170    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210816222225-6986/client.crt: no such file or directory
E0816 22:35:49.750030    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210816222225-6986/client.crt: no such file or directory
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 9.023387277s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210816223418-6986 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.59s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20210816223418-6986 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0816 22:35:52.311089    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210816222225-6986/client.crt: no such file or directory
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context default-k8s-different-port-20210816223418-6986 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/default-k8s-different-port/serial/Stop (92.23s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20210816223418-6986 --alsologtostderr -v=3
E0816 22:35:57.432215    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:07.672765    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:08.096601    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816222225-6986/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20210816223418-6986 --alsologtostderr -v=3: (1m32.224904253s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (92.23s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210816223154-6986 -n old-k8s-version-20210816223154-6986
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210816223154-6986 -n old-k8s-version-20210816223154-6986: exit status 7 (64.975041ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20210816223154-6986 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.15s)

TestStartStop/group/old-k8s-version/serial/SecondStart (439.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210816223154-6986 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.14.0
E0816 22:36:28.153893    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:28.587801    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:28.593060    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:28.603312    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:28.623552    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:28.663835    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:28.744237    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:28.905119    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:29.226077    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:29.867056    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:31.147926    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:33.075759    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:33.081027    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:33.091241    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:33.111479    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:33.151726    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:33.232071    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:33.392458    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:33.709106    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:33.713251    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:34.353738    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:35.453420    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:35.458690    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:35.468893    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:35.489133    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:35.529378    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:35.609659    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:35.634846    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:35.770137    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:36.090596    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:36.731525    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:38.012618    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:38.195519    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:38.829403    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:40.573259    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:43.316008    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:45.694156    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:49.070335    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816222225-6986/client.crt: no such file or directory
E0816 22:36:52.210889    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816222224-6986/client.crt: no such file or directory
E0816 22:36:53.556547    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816222225-6986/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210816223154-6986 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.14.0: (7m19.018738821s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210816223154-6986 -n old-k8s-version-20210816223154-6986
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (439.33s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210816223156-6986 -n no-preload-20210816223156-6986

=== CONT  TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210816223156-6986 -n no-preload-20210816223156-6986: exit status 7 (83.025081ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20210816223156-6986 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210816223333-6986 -n embed-certs-20210816223333-6986

=== CONT  TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210816223333-6986 -n embed-certs-20210816223333-6986: exit status 7 (77.302301ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20210816223333-6986 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (336.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210816223156-6986 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210816223156-6986 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (5m35.907326983s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210816223156-6986 -n no-preload-20210816223156-6986
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (336.26s)

TestStartStop/group/embed-certs/serial/SecondStart (408.98s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210816223333-6986 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.21.3
E0816 22:36:55.935088    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816222225-6986/client.crt: no such file or directory
E0816 22:37:09.114266    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210816222225-6986/client.crt: no such file or directory
E0816 22:37:09.550534    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816222225-6986/client.crt: no such file or directory
E0816 22:37:14.622553    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816222225-6986/client.crt: no such file or directory
E0816 22:37:16.416242    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816222225-6986/client.crt: no such file or directory
E0816 22:37:19.466311    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210816223333-6986 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.21.3: (6m48.714161233s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210816223333-6986 -n embed-certs-20210816223333-6986
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (408.98s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210816223418-6986 -n default-k8s-different-port-20210816223418-6986
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210816223418-6986 -n default-k8s-different-port-20210816223418-6986: exit status 7 (97.080445ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20210816223418-6986 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (387.36s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210816223418-6986 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.21.3
E0816 22:37:30.017327    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816222225-6986/client.crt: no such file or directory
E0816 22:37:50.511798    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816222225-6986/client.crt: no such file or directory
E0816 22:37:55.583732    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816222225-6986/client.crt: no such file or directory
E0816 22:37:57.377101    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:13.448259    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/flannel-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:13.453553    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/flannel-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:13.463889    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/flannel-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:13.484150    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/flannel-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:13.524446    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/flannel-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:13.605309    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/flannel-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:13.766402    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/flannel-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:14.087548    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/flannel-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:14.728132    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/flannel-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:16.008755    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/flannel-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:18.569455    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/flannel-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:20.861716    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:20.867035    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:20.877318    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:20.897595    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:20.937876    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:21.018254    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:21.178672    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:21.499468    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:22.140442    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:23.420649    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:23.690253    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/flannel-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:25.981519    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:31.034579    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:31.101721    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:33.931141    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/flannel-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:41.342405    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816222225-6986/client.crt: no such file or directory
E0816 22:38:54.411551    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/flannel-20210816222225-6986/client.crt: no such file or directory
E0816 22:39:01.823565    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816222225-6986/client.crt: no such file or directory
E0816 22:39:08.368744    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816222224-6986/client.crt: no such file or directory
E0816 22:39:12.432479    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816222225-6986/client.crt: no such file or directory
E0816 22:39:14.932080    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
E0816 22:39:17.504695    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816222225-6986/client.crt: no such file or directory
E0816 22:39:19.297494    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816222225-6986/client.crt: no such file or directory
E0816 22:39:31.878599    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory
E0816 22:39:35.372164    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/flannel-20210816222225-6986/client.crt: no such file or directory
E0816 22:39:36.051585    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816222224-6986/client.crt: no such file or directory
E0816 22:39:42.784586    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816222225-6986/client.crt: no such file or directory
E0816 22:39:43.975982    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816222225-6986/client.crt: no such file or directory
E0816 22:40:13.857522    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/kindnet-20210816222225-6986/client.crt: no such file or directory
E0816 22:40:47.190731    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210816222225-6986/client.crt: no such file or directory
E0816 22:40:57.292396    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/flannel-20210816222225-6986/client.crt: no such file or directory
E0816 22:41:04.705783    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/bridge-20210816222225-6986/client.crt: no such file or directory
E0816 22:41:14.875321    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/calico-20210816222225-6986/client.crt: no such file or directory
E0816 22:41:28.588072    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816222225-6986/client.crt: no such file or directory
E0816 22:41:33.075153    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816222225-6986/client.crt: no such file or directory
E0816 22:41:35.453397    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816222225-6986/client.crt: no such file or directory
E0816 22:41:56.273274    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816222225-6986/client.crt: no such file or directory
E0816 22:42:01.344973    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816222225-6986/client.crt: no such file or directory
E0816 22:42:03.138422    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/custom-weave-20210816222225-6986/client.crt: no such file or directory
E0816 22:42:19.466066    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210816223418-6986 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.21.3: (6m26.948268832s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210816223418-6986 -n default-k8s-different-port-20210816223418-6986
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (387.36s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-4vvg9" [1295b024-f6d7-4bfa-b763-0c0bee43cb71] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.026813417s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-4vvg9" [1295b024-f6d7-4bfa-b763-0c0bee43cb71] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010873528s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context no-preload-20210816223156-6986 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20210816223156-6986 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-mblkl" [4bdd50b5-fee3-11eb-bea8-525400bf2371] Running
E0816 22:43:41.133564    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/flannel-20210816222225-6986/client.crt: no such file or directory
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018367608s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-5d8978d65d-mblkl" [4bdd50b5-fee3-11eb-bea8-525400bf2371] Running

=== CONT  TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011227968s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context old-k8s-version-20210816223154-6986 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-6fqh5" [1375c060-0e73-47e5-b599-6d7e58617b31] Running

=== CONT  TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015478751s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20210816223154-6986 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-6fqh5" [1375c060-0e73-47e5-b599-6d7e58617b31] Running

=== CONT  TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009135006s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context embed-certs-20210816223333-6986 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (6.02s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-7gstk" [806d3966-d956-400b-b825-eb1393026138] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-7gstk" [806d3966-d956-400b-b825-eb1393026138] Running

=== CONT  TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.020801032s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (6.02s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20210816223333-6986 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:343: "kubernetes-dashboard-6fcdf4f6d-7gstk" [806d3966-d956-400b-b825-eb1393026138] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016195083s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context default-k8s-different-port-20210816223418-6986 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20210816223418-6986 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:277: Found non-minikube image: library/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/FirstStart (85.77s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210816224431-6986 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0
E0816 22:44:31.878594    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/addons-20210816214122-6986/client.crt: no such file or directory

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210816224431-6986 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (1m25.767584137s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (85.77s)
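
For readability, here is the start invocation above, unchanged except for line continuations: --wait gates startup on the apiserver, system pods, and the default service account, while the --extra-config flags pass settings straight through to kubelet and kubeadm.

    out/minikube-linux-amd64 start -p newest-cni-20210816224431-6986 \
      --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true \
      --network-plugin=cni \
      --extra-config=kubelet.network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 \
      --driver=kvm2 --container-runtime=containerd \
      --kubernetes-version=v1.22.0-rc.0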

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.61s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20210816224431-6986 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:178: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20210816224431-6986 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.607340125s)
start_stop_delete_test.go:184: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.61s)
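
The enable call above exercises minikube's addon image-override flags: --images remaps a named addon image and --registries points it at an alternate registry (here the deliberately unreachable fake.domain, presumably so later assertions can verify the override without a real pull). Split out for readability:

    out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20210816224431-6986 \
      --images=MetricsServer=k8s.gcr.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain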

TestStartStop/group/newest-cni/serial/Stop (2.09s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20210816224431-6986 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20210816224431-6986 --alsologtostderr -v=3: (2.094778381s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.09s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210816224431-6986 -n newest-cni-20210816224431-6986
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210816224431-6986 -n newest-cni-20210816224431-6986: exit status 7 (66.9242ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20210816224431-6986 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.15s)
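
The "(may be ok)" note reflects that a stopped host is the expected state at this point: status prints "Stopped" and exits non-zero (7 in this run), and the test accepts that before enabling the dashboard addon offline. A minimal sketch for checking the state by hand:

    # A stopped profile prints "Stopped" and returns a non-zero exit code.
    out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210816224431-6986 \
      -n newest-cni-20210816224431-6986; echo "exit: $?"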

TestStartStop/group/newest-cni/serial/SecondStart (78.74s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210816224431-6986 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0
E0816 22:46:26.666387    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/no-preload-20210816223156-6986/client.crt: no such file or directory
E0816 22:46:28.587150    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/cilium-20210816222225-6986/client.crt: no such file or directory
E0816 22:46:33.075281    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/enable-default-cni-20210816222225-6986/client.crt: no such file or directory

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210816224431-6986 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.22.0-rc.0: (1m18.488301733s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210816224431-6986 -n newest-cni-20210816224431-6986
E0816 22:47:19.466188    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/functional-20210816214911-6986/client.crt: no such file or directory
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (78.74s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:246: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:257: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20210816224431-6986 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

Test skip (28/269)

TestDownloadOnly/v1.14.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)
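
These DownloadOnly sub-tests skip whenever a preloaded tarball already covers the requested Kubernetes version and runtime. A sketch for inspecting the local cache, assuming the default MINIKUBE_HOME layout (the path is an assumption, not taken from this log):

    # Preload tarballs are cached per Kubernetes version and container runtime.
    ls ~/.minikube/cache/preloaded-tarball/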

TestDownloadOnly/v1.14.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)

TestDownloadOnly/v1.14.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)

TestDownloadOnly/v1.21.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.21.3/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.21.3/cached-images (0.00s)

TestDownloadOnly/v1.21.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.21.3/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.21.3/binaries (0.00s)

TestDownloadOnly/v1.21.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.21.3/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.21.3/kubectl (0.00s)

TestDownloadOnly/v1.22.0-rc.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.22.0-rc.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.22.0-rc.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.22.0-rc.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/binaries (0.00s)

TestDownloadOnly/v1.22.0-rc.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.22.0-rc.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.0-rc.0/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:212: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:115: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:188: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:467: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:527: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil

=== CONT  TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:43: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:286: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (0.3s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:88: Skipping the test as containerd container runtimes requires CNI
helpers_test.go:176: Cleaning up "kubenet-20210816222224-6986" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-20210816222224-6986
--- SKIP: TestNetworkPlugins/group/kubenet (0.30s)

TestStartStop/group/disable-driver-mounts (0.26s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:91: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-20210816223418-6986" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20210816223418-6986
E0816 22:34:18.609207    6986 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-kvm2-containerd-12230-3544-b85c4fe0fcec6d00161b49ecbfd8182c89122b1a/.minikube/profiles/auto-20210816222224-6986/client.crt: no such file or directory
--- SKIP: TestStartStop/group/disable-driver-mounts (0.26s)
